Dataset schema:

| column | dtype | stats |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k |
| issue_url | stringlengths | 37 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | always 40 |
| after_fix_sha | stringlengths | always 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | stringclasses | 5 values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | stringlengths | 4 to 188 |
| file_content | stringlengths | 0 to 5.12M |
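As a quick orientation, records with this schema can be consumed with the `datasets` library. This is a minimal sketch; the Hub repository id below is a placeholder, since the dump does not name the dataset:

```python
from datasets import load_dataset

# Placeholder repository id: substitute the dataset's actual Hub location.
ds = load_dataset("example-org/issue-fix-pairs", split="train")

print(ds.column_names)

# One record pairs a GitHub issue with one file's content after the fix.
row = ds[0]
print(row["repo_name"], row["issue_id"], row["title"])
print(row["pull_url"], row["after_fix_sha"])
print(row["updated_file"], len(row["file_content"]))
```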
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 75971
title: add_host module cannot handle a variable in a condition
body:
### Summary

When I define a `failed_when:` condition on a task that uses `add_host`, it fails with an error saying the variable is not found.

### Issue Type

Bug Report

### Component Name

task

### Ansible Version

```console
$ ansible --version
ansible [core 2.11.2]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  ansible collection location = /var/lib/awx/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
  jinja version = 2.10.3
  libyaml = True
```

### Configuration

```console
$ ansible-config dump --only-changed
(nothing)
```

### OS / Environment

```console
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
```

### Steps to Reproduce

### pattern 1: using the `add_host` module

```yaml
---
- hosts: localhost
  gather_facts: false
  vars:
    test: 1
  tasks:
    - add_host:
        name: "myhost"
        groups: "mygroup"
        ansible_host: "host46"
        ansible_ssh_host: "192.168.100.100"
      failed_when: test == 2
```

When I ran the above playbook, it failed with the error below even though the variable `test` was defined in the play.

```console
$ ansible-playbook -i localhost, conditional.yml

PLAY [localhost] *********************************************************************************************

TASK [add_host] **********************************************************************************************
ERROR! The conditional check 'test == 2' failed. The error was: error while evaluating conditional (test == 2): 'test' is undefined
```

If I passed the variable via extra vars with a value that made the condition true, it worked as expected.

```console
$ ansible-playbook -i localhost, conditional.yml -e "{test: 2}"

PLAY [localhost] *********************************************************************************************

TASK [add_host] **********************************************************************************************
fatal: [localhost]: FAILED! => {"add_host": {"groups": ["mygroup"], "host_name": "myhost", "host_vars": {"ansible_host": "host46", "ansible_ssh_host": "192.168.100.100"}}, "changed": false, "failed_when_result": true}

NO MORE HOSTS LEFT *******************************************************************************************

PLAY RECAP ***************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
```

On the other hand, with `"{test: 1}"` it still failed with the undefined-variable error.

```console
$ ansible-playbook -i localhost, conditional.yml -e "{test: 1}"

PLAY [localhost] *********************************************************************************************

TASK [add_host] **********************************************************************************************
ERROR! The conditional check 'test == 2' failed. The error was: error while evaluating conditional (test == 2): 'test' is undefined
```

### pattern 2: using the `debug` module, which works as desired

```yaml
---
- hosts: localhost
  gather_facts: false
  vars:
    test: 1
  tasks:
    - debug:
        msg: "debug message"
      failed_when: test == 2
```

It works:

```console
$ ansible-playbook -i localhost, debug.yml

PLAY [localhost] *********************************************************************************************

TASK [debug] *************************************************************************************************
ok: [localhost] => {
    "msg": "debug message"
}

PLAY RECAP ***************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

When `2` was passed, it failed as expected.

```console
$ ansible-playbook -i localhost, debug.yml -e "{test: 2}"

PLAY [localhost] *********************************************************************************************

TASK [debug] *************************************************************************************************
fatal: [localhost]: FAILED! => {
    "msg": "debug message"
}

PLAY RECAP ***************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
```

### Expected Results

Pattern 1 should handle the conditional the same way pattern 2 does.

### Actual Results

```console
(already pasted above)
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
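The symptom in pattern 1 is the generic failure mode of a conditional being evaluated against a variable set that is missing the play vars. As a toy illustration of the mechanism only (plain Jinja2, not Ansible's actual `Conditional` class), the sketch below shows how the same expression either evaluates cleanly or raises an undefined-variable error depending on which variables reach the evaluation step:

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

env = Environment(undefined=StrictUndefined)

def evaluate_conditional(condition, variables):
    """Evaluate a bare Jinja2 expression against a dict of variables."""
    expr = env.compile_expression(condition, undefined_to_none=False)
    return bool(expr(**variables))

play_vars = {"test": 1}
print(evaluate_conditional("test == 2", play_vars))  # False: var present, check passes

try:
    # Simulates the bug: the play vars never reach the evaluation step.
    evaluate_conditional("test == 2", {})
except UndefinedError as exc:
    print("error while evaluating conditional:", exc)  # 'test' is undefined
```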
issue_url: https://github.com/ansible/ansible/issues/75971
pull_url: https://github.com/ansible/ansible/pull/71719
before_fix_sha: 2749d9fbf9242a59ed87f46ea057d84f4768a93e
after_fix_sha: 394d216922d70709248a60f58da300f1e70f5894
report_datetime: 2021-10-08T06:47:15Z
language: python
commit_datetime: 2022-02-04T11:35:23Z
updated_file: lib/ansible/executor/task_executor.py
file_content:
# (c) 2012-2014, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import pty import time import json import signal import subprocess import sys import termios import traceback from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip from ansible.executor.task_result import TaskResult from ansible.executor.module_common import get_action_args_with_defaults from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import binary_type from ansible.module_utils._text import to_text, to_native from ansible.module_utils.connection import write_to_file_descriptor from ansible.playbook.conditional import Conditional from ansible.playbook.task import Task from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var from ansible.vars.clean import namespace_facts, clean_facts from ansible.utils.display import Display from ansible.utils.vars import combine_vars, isidentifier display = Display() RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x] __all__ = ['TaskExecutor'] class TaskTimeoutError(BaseException): pass def task_timeout(signum, frame): raise TaskTimeoutError def remove_omit(task_args, omit_token): ''' Remove args with a value equal to the ``omit_token`` recursively to align with now having suboptions in the argument_spec ''' if not isinstance(task_args, dict): return task_args new_args = {} for i in task_args.items(): if i[1] == omit_token: continue elif isinstance(i[1], dict): new_args[i[0]] = remove_omit(i[1], omit_token) elif isinstance(i[1], list): new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]] else: new_args[i[0]] = i[1] return new_args class TaskExecutor: ''' This is the main worker class for the executor pipeline, which handles loading an action plugin to actually dispatch the task to a given host. This class roughly corresponds to the old Runner() class. ''' def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q): self._host = host self._task = task self._job_vars = job_vars self._play_context = play_context self._new_stdin = new_stdin self._loader = loader self._shared_loader_obj = shared_loader_obj self._connection = None self._final_q = final_q self._loop_eval_error = None self._task.squash() def run(self): ''' The main executor entrypoint, where we determine if the specified task requires looping and either runs the task with self._run_loop() or self._execute(). After that, the returned results are parsed and returned as a dict. 
''' display.debug("in run() - task %s" % self._task._uuid) try: try: items = self._get_loop_items() except AnsibleUndefinedVariable as e: # save the error raised here for use later items = None self._loop_eval_error = e if items is not None: if len(items) > 0: item_results = self._run_loop(items) # create the overall result item res = dict(results=item_results) # loop through the item results and set the global changed/failed/skipped result flags based on any item. res['skipped'] = True for item in item_results: if 'changed' in item and item['changed'] and not res.get('changed'): res['changed'] = True if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])): res['skipped'] = False if 'failed' in item and item['failed']: item_ignore = item.pop('_ansible_ignore_errors') if not res.get('failed'): res['failed'] = True res['msg'] = 'One or more items failed' self._task.ignore_errors = item_ignore elif self._task.ignore_errors and not item_ignore: self._task.ignore_errors = item_ignore # ensure to accumulate these for array in ['warnings', 'deprecations']: if array in item and item[array]: if array not in res: res[array] = [] if not isinstance(item[array], list): item[array] = [item[array]] res[array] = res[array] + item[array] del item[array] if not res.get('failed', False): res['msg'] = 'All items completed' if res['skipped']: res['msg'] = 'All items skipped' else: res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[]) else: display.debug("calling self._execute()") res = self._execute() display.debug("_execute() done") # make sure changed is set in the result, if it's not present if 'changed' not in res: res['changed'] = False def _clean_res(res, errors='surrogate_or_strict'): if isinstance(res, binary_type): return to_unsafe_text(res, errors=errors) elif isinstance(res, dict): for k in res: try: res[k] = _clean_res(res[k], errors=errors) except UnicodeError: if k == 'diff': # If this is a diff, substitute a replacement character if the value # is undecodable as utf8. (Fix #21804) display.warning("We were unable to decode all characters in the module return data." " Replaced some in an effort to return as much as possible") res[k] = _clean_res(res[k], errors='surrogate_then_replace') else: raise elif isinstance(res, list): for idx, item in enumerate(res): res[idx] = _clean_res(item, errors=errors) return res display.debug("dumping result to json") res = _clean_res(res) display.debug("done dumping result, returning") return res except AnsibleError as e: return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log) except Exception as e: return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log) finally: try: self._connection.close() except AttributeError: pass except Exception as e: display.debug(u"error closing connection: %s" % to_text(e)) def _get_loop_items(self): ''' Loads a lookup plugin to handle the with_* portion of a task (if specified), and returns the items result. 
''' # get search path for this task to pass to lookup plugins self._job_vars['ansible_search_path'] = self._task.get_search_path() # ensure basedir is always in (dwim already searches here but we need to display it) if self._loader.get_basedir() not in self._job_vars['ansible_search_path']: self._job_vars['ansible_search_path'].append(self._loader.get_basedir()) templar = Templar(loader=self._loader, variables=self._job_vars) items = None loop_cache = self._job_vars.get('_ansible_loop_cache') if loop_cache is not None: # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to` # to avoid reprocessing the loop items = loop_cache elif self._task.loop_with: if self._task.loop_with in self._shared_loader_obj.lookup_loader: fail = True if self._task.loop_with == 'first_found': # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing. fail = False loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail, convert_bare=False) if not fail: loop_terms = [t for t in loop_terms if not templar.is_template(t)] # get lookup mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar) # give lookup task 'context' for subdir (mostly needed for first_found) for subdir in ['template', 'var', 'file']: # TODO: move this to constants? if subdir in self._task.action: break setattr(mylookup, '_subdir', subdir + 's') # run lookup items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)) else: raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with) elif self._task.loop is not None: items = templar.template(self._task.loop) if not isinstance(items, list): raise AnsibleError( "Invalid data passed to 'loop', it requires a list, got this instead: %s." " Hint: If you passed a list/dict of just one element," " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items ) return items def _run_loop(self, items): ''' Runs the task with the loop items specified and collates the result into an array named 'results' which is inserted into the final result along with the item for which the loop ran. ''' results = [] # make copies of the job vars and task so we can add the item to # the variables and re-validate the task with the item variable # task_vars = self._job_vars.copy() task_vars = self._job_vars loop_var = 'item' index_var = None label = None loop_pause = 0 extended = False templar = Templar(loader=self._loader, variables=self._job_vars) # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate) if self._task.loop_control: loop_var = templar.template(self._task.loop_control.loop_var) index_var = templar.template(self._task.loop_control.index_var) loop_pause = templar.template(self._task.loop_control.pause) extended = templar.template(self._task.loop_control.extended) # This may be 'None',so it is templated below after we ensure a value and an item is assigned label = self._task.loop_control.label # ensure we always have a label if label is None: label = '{{' + loop_var + '}}' if loop_var in task_vars: display.warning(u"%s: The loop variable '%s' is already in use. " u"You should set the `loop_var` value in the `loop_control` option for the task" u" to something else to avoid variable collisions and unexpected behavior." 
% (self._task, loop_var)) ran_once = False no_log = False items_len = len(items) for item_index, item in enumerate(items): task_vars['ansible_loop_var'] = loop_var task_vars[loop_var] = item if index_var: task_vars['ansible_index_var'] = index_var task_vars[index_var] = item_index if extended: task_vars['ansible_loop'] = { 'allitems': items, 'index': item_index + 1, 'index0': item_index, 'first': item_index == 0, 'last': item_index + 1 == items_len, 'length': items_len, 'revindex': items_len - item_index, 'revindex0': items_len - item_index - 1, } try: task_vars['ansible_loop']['nextitem'] = items[item_index + 1] except IndexError: pass if item_index - 1 >= 0: task_vars['ansible_loop']['previtem'] = items[item_index - 1] # Update template vars to reflect current loop iteration templar.available_variables = task_vars # pause between loop iterations if loop_pause and ran_once: try: time.sleep(float(loop_pause)) except ValueError as e: raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e))) else: ran_once = True try: tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True) tmp_task._parent = self._task._parent tmp_play_context = self._play_context.copy() except AnsibleParserError as e: results.append(dict(failed=True, msg=to_text(e))) continue # now we swap the internal task and play context with their copies, # execute, and swap them back so we can do the next iteration cleanly (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) res = self._execute(variables=task_vars) task_fields = self._task.dump_attrs() (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) # update 'general no_log' based on specific no_log no_log = no_log or tmp_task.no_log # now update the result with the item info, and append the result # to the list of results res[loop_var] = item res['ansible_loop_var'] = loop_var if index_var: res[index_var] = item_index res['ansible_index_var'] = index_var if extended: res['ansible_loop'] = task_vars['ansible_loop'] res['_ansible_item_result'] = True res['_ansible_ignore_errors'] = task_fields.get('ignore_errors') # gets templated here unlike rest of loop_control fields, depends on loop_var above try: res['_ansible_item_label'] = templar.template(label, cache=False) except AnsibleUndefinedVariable as e: res.update({ 'failed': True, 'msg': 'Failed to template loop_control.label: %s' % to_text(e) }) tr = TaskResult( self._host.name, self._task._uuid, res, task_fields=task_fields, ) if tr.is_failed() or tr.is_unreachable(): self._final_q.send_callback('v2_runner_item_on_failed', tr) elif tr.is_skipped(): self._final_q.send_callback('v2_runner_item_on_skipped', tr) else: if getattr(self._task, 'diff', False): self._final_q.send_callback('v2_on_file_diff', tr) self._final_q.send_callback('v2_runner_item_on_ok', tr) results.append(res) del task_vars[loop_var] # clear 'connection related' plugin variables for next iteration if self._connection: clear_plugins = { 'connection': self._connection._load_name, 'shell': self._connection._shell._load_name } if self._connection.become: clear_plugins['become'] = self._connection.become._load_name for plugin_type, plugin_name in clear_plugins.items(): for var in C.config.get_plugin_vars(plugin_type, plugin_name): if var in task_vars and var not in self._job_vars: del task_vars[var] self._task.no_log = no_log return results def _execute(self, 
variables=None): ''' The primary workhorse of the executor system, this runs the task on the specified host (which may be the delegated_to host) and handles the retry/until and block rescue/always execution ''' if variables is None: variables = self._job_vars templar = Templar(loader=self._loader, variables=variables) context_validation_error = None try: # TODO: remove play_context as this does not take delegation into account, task itself should hold values # for connection/shell/become/terminal plugin options to finalize. # Kept for now for backwards compatibility and a few functions that are still exclusive to it. # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. self._play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not self._play_context.remote_addr: self._play_context.remote_addr = self._host.address # We also add "magic" variables back into the variables dict to make sure # a certain subset of variables exist. self._play_context.update_vars(variables) except AnsibleError as e: # save the error, which we'll raise later if we don't end up # skipping this task during the conditional evaluation step context_validation_error = e # Evaluate the conditional (if any) for this task, which we do before running # the final task post-validation. We do this before the post validation due to # the fact that the conditional may specify that the task be skipped due to a # variable not being present which would otherwise cause validation to fail try: if not self._task.evaluate_conditional(templar, variables): display.debug("when evaluation is False, skipping this task") return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log) except AnsibleError as e: # loop error takes precedence if self._loop_eval_error is not None: # Display the error from the conditional as well to prevent # losing information useful for debugging. 
display.v(to_text(e)) raise self._loop_eval_error # pylint: disable=raising-bad-type raise # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task if self._loop_eval_error is not None: raise self._loop_eval_error # pylint: disable=raising-bad-type # if we ran into an error while setting up the PlayContext, raise it now, unless is known issue with delegation if context_validation_error is not None and not (self._task.delegate_to and isinstance(context_validation_error, AnsibleUndefinedVariable)): raise context_validation_error # pylint: disable=raising-bad-type # if this task is a TaskInclude, we just return now with a success code so the # main thread can expand the task list for the given host if self._task.action in C._ACTION_ALL_INCLUDE_TASKS: include_args = self._task.args.copy() include_file = include_args.pop('_raw_params', None) if not include_file: return dict(failed=True, msg="No include file was specified to the include") include_file = templar.template(include_file) return dict(include=include_file, include_args=include_args) # if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host elif self._task.action in C._ACTION_INCLUDE_ROLE: include_args = self._task.args.copy() return dict(include_args=include_args) # Now we do final validation on the task, which sets all fields to their final values. try: self._task.post_validate(templar=templar) except AnsibleError: raise except Exception: return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc())) if '_variable_params' in self._task.args: variable_params = self._task.args.pop('_variable_params') if isinstance(variable_params, dict): if C.INJECT_FACTS_AS_VARS: display.warning("Using a variable for a task's 'args' is unsafe in some situations " "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)") variable_params.update(self._task.args) self._task.args = variable_params if self._task.delegate_to: # use vars from delegated host (which already include task vars) instead of original host cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {}) orig_vars = templar.available_variables else: # just use normal host vars cvars = orig_vars = variables templar.available_variables = cvars # get the connection and the handler for this execution if (not self._connection or not getattr(self._connection, 'connected', False) or self._play_context.remote_addr != self._connection._play_context.remote_addr): self._connection = self._get_connection(cvars, templar) else: # if connection is reused, its _play_context is no longer valid and needs # to be replaced with the one templated above, in case other data changed self._connection._play_context = self._play_context plugin_vars = self._set_connection_options(cvars, templar) templar.available_variables = orig_vars # TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules # special handling for python interpreter for network_os, default to ansible python unless overriden if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars: # this also avoids 'python discovery' cvars['ansible_python_interpreter'] = sys.executable # get handler self._handler = self._get_action_handler(connection=self._connection, templar=templar) # Apply default params for action/module, if present self._task.args = 
get_action_args_with_defaults( self._task.resolved_action, self._task.args, self._task.module_defaults, templar, action_groups=self._task._parent._play._action_groups ) # And filter out any fields which were set to default(omit), and got the omit token value omit_token = variables.get('omit') if omit_token is not None: self._task.args = remove_omit(self._task.args, omit_token) # Read some values from the task, so that we can modify them if need be if self._task.until: retries = self._task.retries if retries is None: retries = 3 elif retries <= 0: retries = 1 else: retries += 1 else: retries = 1 delay = self._task.delay if delay < 0: delay = 1 # make a copy of the job vars here, in case we need to update them # with the registered variable value later on when testing conditions vars_copy = variables.copy() display.debug("starting attempt loop") result = None for attempt in range(1, retries + 1): display.debug("running the handler") try: if self._task.timeout: old_sig = signal.signal(signal.SIGALRM, task_timeout) signal.alarm(self._task.timeout) result = self._handler.run(task_vars=variables) except (AnsibleActionFail, AnsibleActionSkip) as e: return e.result except AnsibleConnectionFailure as e: return dict(unreachable=True, msg=to_text(e)) except TaskTimeoutError as e: msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout) return dict(failed=True, msg=msg) finally: if self._task.timeout: signal.alarm(0) old_sig = signal.signal(signal.SIGALRM, old_sig) self._handler.cleanup() display.debug("handler run complete") # preserve no log result["_ansible_no_log"] = self._play_context.no_log if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution if self._task.register: if not isidentifier(self._task.register): raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register) vars_copy[self._task.register] = result if self._task.async_val > 0: if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'): result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy) if result.get('failed'): self._final_q.send_callback( 'v2_runner_on_async_failed', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) else: self._final_q.send_callback( 'v2_runner_on_async_ok', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) # ensure no log is preserved result["_ansible_no_log"] = self._play_context.no_log # helper methods for use below in evaluating changed/failed_when def _evaluate_changed_when_result(result): if self._task.changed_when is not None and self._task.changed_when: cond = Conditional(loader=self._loader) cond.when = self._task.changed_when result['changed'] = cond.evaluate_conditional(templar, vars_copy) def _evaluate_failed_when_result(result): if self._task.failed_when: cond = Conditional(loader=self._loader) cond.when = self._task.failed_when failed_when_result = cond.evaluate_conditional(templar, vars_copy) result['failed_when_result'] = result['failed'] = failed_when_result else: failed_when_result = False return failed_when_result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and 
self._task.delegate_facts: if '_ansible_delegated_vars' in vars_copy: vars_copy['_ansible_delegated_vars'].update(result['ansible_facts']) else: vars_copy['_ansible_delegated_vars'] = result['ansible_facts'] else: vars_copy.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: vars_copy.update(clean_facts(af)) # set the failed property if it was missing. if 'failed' not in result: # rc is here for backwards compatibility and modules that use it instead of 'failed' if 'rc' in result and result['rc'] not in [0, "0"]: result['failed'] = True else: result['failed'] = False # Make attempts and retries available early to allow their use in changed/failed_when if self._task.until: result['attempts'] = attempt # set the changed property if it was missing. if 'changed' not in result: result['changed'] = False if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # re-update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution # This gives changed/failed_when access to additional recently modified # attributes of result if self._task.register: vars_copy[self._task.register] = result # if we didn't skip this task, use the helpers to evaluate the changed/ # failed_when properties if 'skipped' not in result: try: condname = 'changed' _evaluate_changed_when_result(result) condname = 'failed' _evaluate_failed_when_result(result) except AnsibleError as e: result['failed'] = True result['%s_when_result' % condname] = to_text(e) if retries > 1: cond = Conditional(loader=self._loader) cond.when = self._task.until if cond.evaluate_conditional(templar, vars_copy): break else: # no conditional check, or it failed, so sleep for the specified time if attempt < retries: result['_ansible_retry'] = True result['retries'] = retries display.debug('Retrying task, attempt %d of %d' % (attempt, retries)) self._final_q.send_callback( 'v2_runner_retry', TaskResult( self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs() ) ) time.sleep(delay) self._handler = self._get_action_handler(connection=self._connection, templar=templar) else: if retries > 1: # we ran out of attempts, so mark the result as failed result['attempts'] = retries - 1 result['failed'] = True if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # do the final update of the local variables here, for both registered # values and any facts which may have been created if self._task.register: variables[self._task.register] = result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and self._task.delegate_facts: if '_ansible_delegated_vars' in variables: variables['_ansible_delegated_vars'].update(result['ansible_facts']) else: variables['_ansible_delegated_vars'] = result['ansible_facts'] else: variables.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: variables.update(clean_facts(af)) # save the notification target in the result, if it 
was specified, as # this task may be running in a loop in which case the notification # may be item-specific, ie. "notify: service {{item}}" if self._task.notify is not None: result['_ansible_notify'] = self._task.notify # add the delegated vars to the result, so we can reference them # on the results side without having to do any further templating # also now add conneciton vars results when delegating if self._task.delegate_to: result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to} for k in plugin_vars: result["_ansible_delegated_vars"][k] = cvars.get(k) # note: here for callbacks that rely on this info to display delegation for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'): if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars: result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed) # and return display.debug("attempt loop complete, returning result") return result def _poll_async_result(self, result, templar, task_vars=None): ''' Polls for the specified JID to be complete ''' if task_vars is None: task_vars = self._job_vars async_jid = result.get('ansible_job_id') if async_jid is None: return dict(failed=True, msg="No job id was returned by the async task") # Create a new pseudo-task to run the async_status module, and run # that (with a sleep for "poll" seconds between each retry) until the # async time limit is exceeded. async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment)) # FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized # Because this is an async task, the action handler is async. However, # we need the 'normal' action handler for the status check, so get it # now via the action_loader async_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=async_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) time_left = self._task.async_val while time_left > 0: time.sleep(self._task.poll) try: async_result = async_handler.run(task_vars=task_vars) # We do not bail out of the loop in cases where the failure # is associated with a parsing error. The async_runner can # have issues which result in a half-written/unparseable result # file on disk, which manifests to the user as a timeout happening # before it's time to timeout. if (int(async_result.get('finished', 0)) == 1 or ('failed' in async_result and async_result.get('_ansible_parsed', False)) or 'skipped' in async_result): break except Exception as e: # Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal. # On an exception, call the connection's reset method if it has one # (eg, drop/recreate WinRM connection; some reused connections are in a broken state) display.vvvv("Exception during async poll, retrying... 
(%s)" % to_text(e)) display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc())) try: async_handler._connection.reset() except AttributeError: pass # Little hack to raise the exception if we've exhausted the timeout period time_left -= self._task.poll if time_left <= 0: raise else: time_left -= self._task.poll self._final_q.send_callback( 'v2_runner_on_async_poll', TaskResult( self._host.name, async_task._uuid, async_result, task_fields=async_task.dump_attrs(), ), ) if int(async_result.get('finished', 0)) != 1: if async_result.get('_ansible_parsed'): return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result) else: return dict(failed=True, msg="async task produced unparseable results", async_result=async_result) else: # If the async task finished, automatically cleanup the temporary # status file left behind. cleanup_task = Task.load( { 'async_status': { 'jid': async_jid, 'mode': 'cleanup', }, 'environment': self._task.environment, } ) cleanup_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=cleanup_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) cleanup_handler.run(task_vars=task_vars) cleanup_handler.cleanup(force=True) async_handler.cleanup(force=True) return async_result def _get_become(self, name): become = become_loader.get(name) if not become: raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. " "Use `ansible-doc -t become -l` to list available plugins." % name) return become def _get_connection(self, cvars, templar): ''' Reads the connection property for the host, and returns the correct connection object from the list of connection plugins ''' # use magic var if it exists, if not, let task inheritance do it's thing. if cvars.get('ansible_connection') is not None: self._play_context.connection = templar.template(cvars['ansible_connection']) else: self._play_context.connection = self._task.connection # TODO: play context has logic to update the connection for 'smart' # (default value, will chose between ssh and paramiko) and 'persistent' # (really paramiko), eventually this should move to task object itself. connection_name = self._play_context.connection # load connection conn_type = connection_name connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context( conn_type, self._play_context, self._new_stdin, task_uuid=self._task._uuid, ansible_playbook_pid=to_text(os.getppid()) ) if not connection: raise AnsibleError("the connection plugin '%s' was not found" % conn_type) # load become plugin if needed if cvars.get('ansible_become') is not None: become = boolean(templar.template(cvars['ansible_become'])) else: become = self._task.become if become: if cvars.get('ansible_become_method'): become_plugin = self._get_become(templar.template(cvars['ansible_become_method'])) else: become_plugin = self._get_become(self._task.become_method) try: connection.set_become_plugin(become_plugin) except AttributeError: # Older connection plugin that does not support set_become_plugin pass if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False): raise AnsibleError( "The '%s' connection does not provide a TTY which is required for the selected " "become plugin: %s." 
% (conn_type, become_plugin.name) ) # Backwards compat for connection plugins that don't support become plugins # Just do this unconditionally for now, we could move it inside of the # AttributeError above later self._play_context.set_become_plugin(become_plugin.name) # Also backwards compat call for those still using play_context self._play_context.set_attributes_from_plugin(connection) if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)): self._play_context.timeout = connection.get_option('persistent_command_timeout') display.vvvv('attempting to start connection', host=self._play_context.remote_addr) display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr) options = self._get_persistent_connection_options(connection, cvars, templar) socket_path = start_connection(self._play_context, options, self._task._uuid) display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr) setattr(connection, '_socket_path', socket_path) return connection def _get_persistent_connection_options(self, connection, final_vars, templar): option_vars = C.config.get_plugin_vars('connection', connection._load_name) plugin = connection._sub_plugin if plugin.get('type'): option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name'])) options = {} for k in option_vars: if k in final_vars: options[k] = templar.template(final_vars[k]) return options def _set_plugin_options(self, plugin_type, variables, templar, task_keys): try: plugin = getattr(self._connection, '_%s' % plugin_type) except AttributeError: # Some plugins are assigned to private attrs, ``become`` is not plugin = getattr(self._connection, plugin_type) option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name) options = {} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # TODO move to task method? plugin.set_options(task_keys=task_keys, var_options=options) return option_vars def _set_connection_options(self, variables, templar): # keep list of variable names possibly consumed varnames = [] # grab list of usable vars for this plugin option_vars = C.config.get_plugin_vars('connection', self._connection._load_name) varnames.extend(option_vars) # create dict of 'templated vars' options = {'_extras': {}} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # add extras if plugin supports them if getattr(self._connection, 'allow_extras', False): for k in variables: if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options: options['_extras'][k] = templar.template(variables[k]) task_keys = self._task.dump_attrs() # The task_keys 'timeout' attr is the task's timeout, not the connection timeout. # The connection timeout is threaded through the play_context for now. task_keys['timeout'] = self._play_context.timeout if self._play_context.password: # The connection password is threaded through the play_context for # now. This is something we ultimately want to avoid, but the first # step is to get connection plugins pulling the password through the # config system instead of directly accessing play_context. 
task_keys['password'] = self._play_context.password # Prevent task retries from overriding connection retries del(task_keys['retries']) # set options with 'templated vars' specific to this plugin and dependent ones self._connection.set_options(task_keys=task_keys, var_options=options) varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys)) if self._connection.become is not None: if self._play_context.become_pass: # FIXME: eventually remove from task and play_context, here for backwards compat # keep out of play objects to avoid accidental disclosure, only become plugin should have # The become pass is already in the play_context if given on # the CLI (-K). Make the plugin aware of it in this case. task_keys['become_pass'] = self._play_context.become_pass varnames.extend(self._set_plugin_options('become', variables, templar, task_keys)) # FOR BACKWARDS COMPAT: for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'): try: setattr(self._play_context, option, self._connection.become.get_option(option)) except KeyError: pass # some plugins don't support all base flags self._play_context.prompt = self._connection.become.prompt return varnames def _get_action_handler(self, connection, templar): ''' Returns the correct action plugin to handle the requestion task action ''' module_collection, separator, module_name = self._task.action.rpartition(".") module_prefix = module_name.split('_')[0] if module_collection: # For network modules, which look for one action plugin per platform, look for the # action plugin in the same collection as the module by prefixing the action plugin # with the same collection. network_action = "{0}.{1}".format(module_collection, module_prefix) else: network_action = module_prefix collections = self._task.collections # let action plugin override module, fallback to 'normal' action plugin otherwise if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections): handler_name = self._task.action elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))): handler_name = network_action display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name, action=self._task.action), host=self._play_context.remote_addr) else: # use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search handler_name = 'ansible.legacy.normal' collections = None # until then, we don't want the task's collection list to be consulted; use the builtin handler = self._shared_loader_obj.action_loader.get( handler_name, task=self._task, connection=connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, collection_list=collections ) if not handler: raise AnsibleError("the handler '%s' was not found" % handler_name) return handler def start_connection(play_context, variables, task_uuid): ''' Starts the persistent connection ''' candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])] candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep)) for dirname in candidate_paths: ansible_connection = os.path.join(dirname, 'ansible-connection') if os.path.isfile(ansible_connection): display.vvvv("Found ansible-connection at path {0}".format(ansible_connection)) break else: raise AnsibleError("Unable to find location of 'ansible-connection'. 
" "Please set or check the value of ANSIBLE_CONNECTION_PATH") env = os.environ.copy() env.update({ # HACK; most of these paths may change during the controller's lifetime # (eg, due to late dynamic role includes, multi-playbook execution), without a way # to invalidate/update, ansible-connection won't always see the same plugins the controller # can. 'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(), 'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(), 'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)), 'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(), 'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(), 'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(), 'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(), }) python = sys.executable master, slave = pty.openpty() p = subprocess.Popen( [python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) os.close(slave) # We need to set the pty into noncanonical mode. This ensures that we # can receive lines longer than 4095 characters (plus newline) without # truncating. old = termios.tcgetattr(master) new = termios.tcgetattr(master) new[3] = new[3] & ~termios.ICANON try: termios.tcsetattr(master, termios.TCSANOW, new) write_to_file_descriptor(master, variables) write_to_file_descriptor(master, play_context.serialize()) (stdout, stderr) = p.communicate() finally: termios.tcsetattr(master, termios.TCSANOW, old) os.close(master) if p.returncode == 0: result = json.loads(to_text(stdout, errors='surrogate_then_replace')) else: try: result = json.loads(to_text(stderr, errors='surrogate_then_replace')) except getattr(json.decoder, 'JSONDecodeError', ValueError): # JSONDecodeError only available on Python 3.5+ result = {'error': to_text(stderr, errors='surrogate_then_replace')} if 'messages' in result: for level, message in result['messages']: if level == 'log': display.display(message, log_only=True) elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'): getattr(display, level)(message, host=play_context.remote_addr) else: if hasattr(display, level): getattr(display, level)(message) else: display.vvvv(message, host=play_context.remote_addr) if 'error' in result: if play_context.verbosity > 2: if result.get('exception'): msg = "The full traceback is:\n" + result['exception'] display.display(msg, color=C.COLOR_ERROR) raise AnsibleError(result['error']) return result['socket_path']
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 75971
title: add_host module cannot handle a variable in a condition
body:
(identical to the issue body of the previous record; the same issue and fix touched two files)
issue_url: https://github.com/ansible/ansible/issues/75971
pull_url: https://github.com/ansible/ansible/pull/71719
before_fix_sha: 2749d9fbf9242a59ed87f46ea057d84f4768a93e
after_fix_sha: 394d216922d70709248a60f58da300f1e70f5894
report_datetime: 2021-10-08T06:47:15Z
language: python
commit_datetime: 2022-02-04T11:35:23Z
updated_file: lib/ansible/playbook/task.py
file_content:
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError from ansible.module_utils._text import to_native from ansible.module_utils.six import string_types from ansible.parsing.mod_args import ModuleArgsParser from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping from ansible.plugins.loader import lookup_loader from ansible.playbook.attribute import FieldAttribute from ansible.playbook.base import Base from ansible.playbook.block import Block from ansible.playbook.collectionsearch import CollectionSearch from ansible.playbook.conditional import Conditional from ansible.playbook.loop_control import LoopControl from ansible.playbook.role import Role from ansible.playbook.taggable import Taggable from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.display import Display from ansible.utils.sentinel import Sentinel __all__ = ['Task'] display = Display() class Task(Base, Conditional, Taggable, CollectionSearch): """ A task is a language feature that represents a call to a module, with given arguments and other parameters. A handler is a subclass of a task. Usage: Task.load(datastructure) -> Task Task.something(...) """ # ================================================================================= # ATTRIBUTES # load_<attribute_name> and # validate_<attribute_name> # will be used if defined # might be possible to define others # NOTE: ONLY set defaults on task attributes that are not inheritable, # inheritance is only triggered if the 'current value' is None, # default can be set at play/top level object and inheritance will take it's course. 
_args = FieldAttribute(isa='dict', default=dict) _action = FieldAttribute(isa='string') _async_val = FieldAttribute(isa='int', default=0, alias='async') _changed_when = FieldAttribute(isa='list', default=list) _delay = FieldAttribute(isa='int', default=5) _delegate_to = FieldAttribute(isa='string') _delegate_facts = FieldAttribute(isa='bool') _failed_when = FieldAttribute(isa='list', default=list) _loop = FieldAttribute() _loop_control = FieldAttribute(isa='class', class_type=LoopControl, inherit=False) _notify = FieldAttribute(isa='list') _poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL) _register = FieldAttribute(isa='string', static=True) _retries = FieldAttribute(isa='int', default=3) _until = FieldAttribute(isa='list', default=list) # deprecated, used to be loop and loop_args but loop has been repurposed _loop_with = FieldAttribute(isa='string', private=True, inherit=False) def __init__(self, block=None, role=None, task_include=None): ''' constructors a task, without the Task.load classmethod, it will be pretty blank ''' self._role = role self._parent = None self.implicit = False self.resolved_action = None if task_include: self._parent = task_include else: self._parent = block super(Task, self).__init__() def get_name(self, include_role_fqcn=True): ''' return the name of the task ''' if self._role: role_name = self._role.get_name(include_role_fqcn=include_role_fqcn) if self._role and self.name: return "%s : %s" % (role_name, self.name) elif self.name: return self.name else: if self._role: return "%s : %s" % (role_name, self.action) else: return "%s" % (self.action,) def _merge_kv(self, ds): if ds is None: return "" elif isinstance(ds, string_types): return ds elif isinstance(ds, dict): buf = "" for (k, v) in ds.items(): if k.startswith('_'): continue buf = buf + "%s=%s " % (k, v) buf = buf.strip() return buf @staticmethod def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None): t = Task(block=block, role=role, task_include=task_include) return t.load_data(data, variable_manager=variable_manager, loader=loader) def __repr__(self): ''' returns a human readable representation of the task ''' if self.get_name() in C._ACTION_META: return "TASK: meta (%s)" % self.args['_raw_params'] else: return "TASK: %s" % self.get_name() def _preprocess_with_loop(self, ds, new_ds, k, v): ''' take a lookup plugin name and store it correctly ''' loop_name = k.replace("with_", "") if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None: raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds) if v is None: raise AnsibleError("you must specify a value when using %s" % k, obj=ds) new_ds['loop_with'] = loop_name new_ds['loop'] = v # display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead", # version="2.10", collection_name='ansible.builtin') def preprocess_data(self, ds): ''' tasks are especially complex arguments so need pre-processing. keep it short. 
''' if not isinstance(ds, dict): raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds))) # the new, cleaned datastructure, which will have legacy # items reduced to a standard structure suitable for the # attributes of the task class new_ds = AnsibleMapping() if isinstance(ds, AnsibleBaseYAMLObject): new_ds.ansible_pos = ds.ansible_pos # since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator default_collection = AnsibleCollectionConfig.default_collection collections_list = ds.get('collections') if collections_list is None: # use the parent value if our ds doesn't define it collections_list = self.collections else: # Validate this untemplated field early on to guarantee we are dealing with a list. # This is also done in CollectionSearch._load_collections() but this runs before that call. collections_list = self.get_validated_value('collections', self._collections, collections_list, None) if default_collection and not self._role: # FIXME: and not a collections role if collections_list: if default_collection not in collections_list: collections_list.insert(0, default_collection) else: collections_list = [default_collection] if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list: collections_list.append('ansible.legacy') if collections_list: ds['collections'] = collections_list # use the args parsing class to determine the action, args, # and the delegate_to value from the various possible forms # supported as legacy args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list) try: (action, args, delegate_to) = args_parser.parse() except AnsibleParserError as e: # if the raises exception was created with obj=ds args, then it includes the detail # so we dont need to add it so we can just re raise. if e.obj: raise # But if it wasn't, we can add the yaml object now to get more detail raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e) else: self.resolved_action = args_parser.resolved_action # the command/shell/script modules used to support the `cmd` arg, # which corresponds to what we now call _raw_params, so move that # value over to _raw_params (assuming it is empty) if action in C._ACTION_HAS_CMD: if 'cmd' in args: if args.get('_raw_params', '') != '': raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified." " Please put everything in one or the other place.", obj=ds) args['_raw_params'] = args.pop('cmd') new_ds['action'] = action new_ds['args'] = args new_ds['delegate_to'] = delegate_to # we handle any 'vars' specified in the ds here, as we may # be adding things to them below (special handling for includes). # When that deprecated feature is removed, this can be too. 
if 'vars' in ds: # _load_vars is defined in Base, and is used to load a dictionary # or list of dictionaries in a standard way new_ds['vars'] = self._load_vars(None, ds.get('vars')) else: new_ds['vars'] = dict() for (k, v) in ds.items(): if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell': # we don't want to re-assign these values, which were determined by the ModuleArgsParser() above continue elif k.startswith('with_') and k.replace("with_", "") in lookup_loader: # transform into loop property self._preprocess_with_loop(ds, new_ds, k, v) elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self._valid_attrs: new_ds[k] = v else: display.warning("Ignoring invalid attribute: %s" % k) return super(Task, self).preprocess_data(new_ds) def _load_loop_control(self, attr, ds): if not isinstance(ds, dict): raise AnsibleParserError( "the `loop_control` value must be specified as a dictionary and cannot " "be a variable itself (though it can contain variables)", obj=ds, ) return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader) def _validate_attributes(self, ds): try: super(Task, self)._validate_attributes(ds) except AnsibleParserError as e: e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration' raise e def post_validate(self, templar): ''' Override of base class post_validate, to also do final validation on the block and task include (if any) to which this task belongs. ''' if self._parent: self._parent.post_validate(templar) if AnsibleCollectionConfig.default_collection: pass super(Task, self).post_validate(templar) def _post_validate_loop(self, attr, value, templar): ''' Override post validation for the loop field, which is templated specially in the TaskExecutor class when evaluating loops. ''' return value def _post_validate_environment(self, attr, value, templar): ''' Override post validation of vars on the play, as we don't want to template these too early. ''' env = {} if value is not None: def _parse_env_kv(k, v): try: env[k] = templar.template(v, convert_bare=False) except AnsibleUndefinedVariable as e: error = to_native(e) if self.action in C._ACTION_FACT_GATHERING and 'ansible_facts.env' in error or 'ansible_env' in error: # ignore as fact gathering is required for 'env' facts return raise if isinstance(value, list): for env_item in value: if isinstance(env_item, dict): for k in env_item: _parse_env_kv(k, env_item[k]) else: isdict = templar.template(env_item, convert_bare=False) if isinstance(isdict, dict): env.update(isdict) else: display.warning("could not parse environment value, skipping: %s" % value) elif isinstance(value, dict): # should not really happen env = dict() for env_item in value: _parse_env_kv(env_item, value[env_item]) else: # at this point it should be a simple string, also should not happen env = templar.template(value, convert_bare=False) return env def _post_validate_changed_when(self, attr, value, templar): ''' changed_when is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. ''' return value def _post_validate_failed_when(self, attr, value, templar): ''' failed_when is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. 
''' return value def _post_validate_until(self, attr, value, templar): ''' until is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. ''' return value def get_vars(self): all_vars = dict() if self._parent: all_vars.update(self._parent.get_vars()) all_vars.update(self.vars) if 'tags' in all_vars: del all_vars['tags'] if 'when' in all_vars: del all_vars['when'] return all_vars def get_include_params(self): all_vars = dict() if self._parent: all_vars.update(self._parent.get_include_params()) if self.action in C._ACTION_ALL_INCLUDES: all_vars.update(self.vars) return all_vars def copy(self, exclude_parent=False, exclude_tasks=False): new_me = super(Task, self).copy() new_me._parent = None if self._parent and not exclude_parent: new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks) new_me._role = None if self._role: new_me._role = self._role new_me.implicit = self.implicit new_me.resolved_action = self.resolved_action return new_me def serialize(self): data = super(Task, self).serialize() if not self._squashed and not self._finalized: if self._parent: data['parent'] = self._parent.serialize() data['parent_type'] = self._parent.__class__.__name__ if self._role: data['role'] = self._role.serialize() data['implicit'] = self.implicit data['resolved_action'] = self.resolved_action return data def deserialize(self, data): # import is here to avoid import loops from ansible.playbook.task_include import TaskInclude from ansible.playbook.handler_task_include import HandlerTaskInclude parent_data = data.get('parent', None) if parent_data: parent_type = data.get('parent_type') if parent_type == 'Block': p = Block() elif parent_type == 'TaskInclude': p = TaskInclude() elif parent_type == 'HandlerTaskInclude': p = HandlerTaskInclude() p.deserialize(parent_data) self._parent = p del data['parent'] role_data = data.get('role') if role_data: r = Role() r.deserialize(role_data) self._role = r del data['role'] self.implicit = data.get('implicit', False) self.resolved_action = data.get('resolved_action') super(Task, self).deserialize(data) def set_loader(self, loader): ''' Sets the loader on this object and recursively on parent, child objects. This is used primarily after the Task has been serialized/deserialized, which does not preserve the loader. ''' self._loader = loader if self._parent: self._parent.set_loader(loader) def _get_parent_attribute(self, attr, extend=False, prepend=False): ''' Generic logic to get the attribute or parent attribute for a task value. 
''' extend = self._valid_attrs[attr].extend prepend = self._valid_attrs[attr].prepend try: value = self._attributes[attr] # If parent is static, we can grab attrs from the parent # otherwise, defer to the grandparent if getattr(self._parent, 'statically_loaded', True): _parent = self._parent else: _parent = self._parent._parent if _parent and (value is Sentinel or extend): if getattr(_parent, 'statically_loaded', True): # vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'): parent_value = _parent._get_parent_attribute(attr) else: parent_value = _parent._attributes.get(attr, Sentinel) if extend: value = self._extend_value(value, parent_value, prepend) else: value = parent_value except KeyError: pass return value def all_parents_static(self): if self._parent: return self._parent.all_parents_static() return True def get_first_parent_include(self): from ansible.playbook.task_include import TaskInclude if self._parent: if isinstance(self._parent, TaskInclude): return self._parent return self._parent.get_first_parent_include() return None
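
Worth highlighting in the file above: `_post_validate_changed_when`, `_post_validate_failed_when`, and `_post_validate_until` all return the value untouched, so these expressions deliberately survive `post_validate()` untemplated and are only evaluated after the task has run, against the full variable context. A standalone sketch of that deferral pattern (class and field names here are illustrative, not Ansible's):

```python
# Illustrative deferral pattern: per-field hooks can opt a field out of
# early templating so it is evaluated later with the final variables.
class FakeTemplar:
    def template(self, value):
        return "<templated:%s>" % value

class Validatable:
    def __init__(self, fields):
        self.fields = fields

    def _post_validate_failed_when(self, value, templar):
        # returned untouched -- evaluated after execution, not here
        return value

    def post_validate(self, templar):
        for name, value in self.fields.items():
            hook = getattr(self, '_post_validate_%s' % name, None)
            self.fields[name] = hook(value, templar) if hook else templar.template(value)

obj = Validatable({'name': 'add a host', 'failed_when': 'test == 2'})
obj.post_validate(FakeTemplar())
print(obj.fields)
# {'name': '<templated:add a host>', 'failed_when': 'test == 2'}
```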
lib/ansible/plugins/strategy/__init__.py
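The second updated file is the strategy base class, reproduced below. When an `add_host` task completes, the controller-side `_add_host()` helper consumes the payload under `result_item['add_host']`, after which `post_process_whens()` evaluates the task's `changed_when`/`failed_when`. The payload shape, taken from the failing run earlier in this report:

```python
# Shape of the result_item handled by StrategyBase._add_host() below;
# values mirror the reproduction playbook's add_host task.
result_item = {
    'add_host': {
        'host_name': 'myhost',
        'groups': ['mygroup'],
        'host_vars': {
            'ansible_host': 'host46',
            'ansible_ssh_host': '192.168.100.100',
        },
    },
}

new_host_info = result_item.get('add_host', dict())
# The strategy then calls self._add_host(new_host_info, result_item) to
# create or update the inventory host, merge host_vars, attach groups and
# reconcile inventory, followed by post_process_whens(result_item, ...).
```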
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import cmd import functools import os import pprint import sys import threading import time from collections import deque from multiprocessing import Lock from queue import Queue from jinja2.exceptions import UndefinedError from ansible import constants as C from ansible import context from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError from ansible.executor import action_write_locks from ansible.executor.play_iterator import IteratingStates, FailedStates from ansible.executor.process.worker import WorkerProcess from ansible.executor.task_result import TaskResult from ansible.executor.task_queue_manager import CallbackSend from ansible.module_utils.six import string_types from ansible.module_utils._text import to_text from ansible.module_utils.connection import Connection, ConnectionError from ansible.playbook.conditional import Conditional from ansible.playbook.handler import Handler from ansible.playbook.helpers import load_list_of_blocks from ansible.playbook.included_file import IncludedFile from ansible.playbook.task import Task from ansible.playbook.task_include import TaskInclude from ansible.plugins import loader as plugin_loader from ansible.template import Templar from ansible.utils.display import Display from ansible.utils.fqcn import add_internal_fqcns from ansible.utils.unsafe_proxy import wrap_var from ansible.utils.vars import combine_vars from ansible.vars.clean import strip_internal_keys, module_response_deepcopy display = Display() __all__ = ['StrategyBase'] # This list can be an exact match, or start of string bound # does not accept regex ALWAYS_DELEGATE_FACT_PREFIXES = frozenset(( 'discovered_interpreter_', )) class StrategySentinel: pass _sentinel = StrategySentinel() def post_process_whens(result, task, templar): cond = None if task.changed_when: cond = Conditional(loader=templar._loader) cond.when = task.changed_when result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) if task.failed_when: if cond is None: cond = Conditional(loader=templar._loader) cond.when = task.failed_when failed_when_result = cond.evaluate_conditional(templar, templar.available_variables) result['failed_when_result'] = result['failed'] = failed_when_result def results_thread_main(strategy): while True: try: result = strategy._final_q.get() if isinstance(result, StrategySentinel): break elif isinstance(result, CallbackSend): for arg in result.args: if isinstance(arg, TaskResult): strategy.normalize_task_result(arg) break strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs) elif isinstance(result, TaskResult): strategy.normalize_task_result(result) with strategy._results_lock: # 
only handlers have the listen attr, so this must be a handler # we split up the results into two queues here to make sure # handler and regular result processing don't cross wires if 'listen' in result._task_fields: strategy._handler_results.append(result) else: strategy._results.append(result) else: display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result)) except (IOError, EOFError): break except Queue.Empty: pass def debug_closure(func): """Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger""" @functools.wraps(func) def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False): status_to_stats_map = ( ('is_failed', 'failures'), ('is_unreachable', 'dark'), ('is_changed', 'changed'), ('is_skipped', 'skipped'), ) # We don't know the host yet, copy the previous states, for lookup after we process new results prev_host_states = iterator._host_states.copy() results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) _processed_results = [] for result in results: task = result._task host = result._host _queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None) task_vars = _queued_task_args['task_vars'] play_context = _queued_task_args['play_context'] # Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state try: prev_host_state = prev_host_states[host.name] except KeyError: prev_host_state = iterator.get_host_state(host) while result.needs_debugger(globally_enabled=self.debugger_active): next_action = NextAction() dbg = Debugger(task, host, task_vars, play_context, result, next_action) dbg.cmdloop() if next_action.result == NextAction.REDO: # rollback host state self._tqm.clear_failed_hosts() if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed(): for host_name, state in prev_host_states.items(): if host_name == host.name: continue iterator.set_state_for_host(host_name, state) iterator._play._removed_hosts.remove(host_name) iterator.set_state_for_host(host.name, prev_host_state) for method, what in status_to_stats_map: if getattr(result, method)(): self._tqm._stats.decrement(what, host.name) self._tqm._stats.decrement('ok', host.name) # redo self._queue_task(host, task, task_vars, play_context) _processed_results.extend(debug_closure(func)(self, iterator, one_pass)) break elif next_action.result == NextAction.CONTINUE: _processed_results.append(result) break elif next_action.result == NextAction.EXIT: # Matches KeyboardInterrupt from bin/ansible sys.exit(99) else: _processed_results.append(result) return _processed_results return inner class StrategyBase: ''' This is the base class for strategy plugins, which contains some common code useful to all strategies like running handlers, cleanup actions, etc. 
''' # by default, strategies should support throttling but we allow individual # strategies to disable this and either forego supporting it or managing # the throttling internally (as `free` does) ALLOW_BASE_THROTTLING = True def __init__(self, tqm): self._tqm = tqm self._inventory = tqm.get_inventory() self._workers = tqm._workers self._variable_manager = tqm.get_variable_manager() self._loader = tqm.get_loader() self._final_q = tqm._final_q self._step = context.CLIARGS.get('step', False) self._diff = context.CLIARGS.get('diff', False) # the task cache is a dictionary of tuples of (host.name, task._uuid) # used to find the original task object of in-flight tasks and to store # the task args/vars and play context info used to queue the task. self._queued_task_cache = {} # Backwards compat: self._display isn't really needed, just import the global display and use that. self._display = display # internal counters self._pending_results = 0 self._pending_handler_results = 0 self._cur_worker = 0 # this dictionary is used to keep track of hosts that have # outstanding tasks still in queue self._blocked_hosts = dict() # this dictionary is used to keep track of hosts that have # flushed handlers self._flushed_hosts = dict() self._results = deque() self._handler_results = deque() self._results_lock = threading.Condition(threading.Lock()) # create the result processing thread for reading results in the background self._results_thread = threading.Thread(target=results_thread_main, args=(self,)) self._results_thread.daemon = True self._results_thread.start() # holds the list of active (persistent) connections to be shutdown at # play completion self._active_connections = dict() # Caches for get_host calls, to avoid calling excessively # These values should be set at the top of the ``run`` method of each # strategy plugin. Use ``_set_hosts_cache`` to set these values self._hosts_cache = [] self._hosts_cache_all = [] self.debugger_active = C.ENABLE_TASK_DEBUGGER def _set_hosts_cache(self, play, refresh=True): """Responsible for setting _hosts_cache and _hosts_cache_all See comment in ``__init__`` for the purpose of these caches """ if not refresh and all((self._hosts_cache, self._hosts_cache_all)): return if not play.finalized and Templar(None).is_template(play.hosts): _pattern = 'all' else: _pattern = play.hosts or 'all' self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)] self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)] def cleanup(self): # close active persistent connections for sock in self._active_connections.values(): try: conn = Connection(sock) conn.reset() except ConnectionError as e: # most likely socket is already closed display.debug("got an error while closing persistent connection: %s" % e) self._final_q.put(_sentinel) self._results_thread.join() def run(self, iterator, play_context, result=0): # execute one more pass through the iterator without peeking, to # make sure that all of the hosts are advanced to their final task. # This should be safe, as everything should be IteratingStates.COMPLETE by # this point, though the strategy may not advance the hosts itself. 
for host in self._hosts_cache: if host not in self._tqm._unreachable_hosts: try: iterator.get_next_task_for_host(self._inventory.hosts[host]) except KeyError: iterator.get_next_task_for_host(self._inventory.get_host(host)) # save the failed/unreachable hosts, as the run_handlers() # method will clear that information during its execution failed_hosts = iterator.get_failed_hosts() unreachable_hosts = self._tqm._unreachable_hosts.keys() display.debug("running handlers") handler_result = self.run_handlers(iterator, play_context) if isinstance(handler_result, bool) and not handler_result: result |= self._tqm.RUN_ERROR elif not handler_result: result |= handler_result # now update with the hosts (if any) that failed or were # unreachable during the handler execution phase failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts()) unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys()) # return the appropriate code, depending on the status hosts after the run if not isinstance(result, bool) and result != self._tqm.RUN_OK: return result elif len(unreachable_hosts) > 0: return self._tqm.RUN_UNREACHABLE_HOSTS elif len(failed_hosts) > 0: return self._tqm.RUN_FAILED_HOSTS else: return self._tqm.RUN_OK def get_hosts_remaining(self, play): self._set_hosts_cache(play, refresh=False) ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts) return [host for host in self._hosts_cache if host not in ignore] def get_failed_hosts(self, play): self._set_hosts_cache(play, refresh=False) return [host for host in self._hosts_cache if host in self._tqm._failed_hosts] def add_tqm_variables(self, vars, play): ''' Base class method to add extra variables/information to the list of task vars sent through the executor engine regarding the task queue manager state. ''' vars['ansible_current_hosts'] = self.get_hosts_remaining(play) vars['ansible_failed_hosts'] = self.get_failed_hosts(play) def _queue_task(self, host, task, task_vars, play_context): ''' handles queueing the task up to be sent to a worker ''' display.debug("entering _queue_task() for %s/%s" % (host.name, task.action)) # Add a write lock for tasks. # Maybe this should be added somewhere further up the call stack but # this is the earliest in the code where we have task (1) extracted # into its own variable and (2) there's only a single code path # leading to the module being run. This is called by three # functions: __init__.py::_do_handler_run(), linear.py::run(), and # free.py::run() so we'd have to add to all three to do it there. # The next common higher level is __init__.py::run() and that has # tasks inside of play_iterator so we'd have to extract them to do it # there. if task.action not in action_write_locks.action_write_locks: display.debug('Creating lock for %s' % task.action) action_write_locks.action_write_locks[task.action] = Lock() # create a templar and template things we need later for the queuing process templar = Templar(loader=self._loader, variables=task_vars) try: throttle = int(templar.template(task.throttle)) except Exception as e: raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e) # and then queue the new task try: # Determine the "rewind point" of the worker list. This means we start # iterating over the list of workers until the end of the list is found. # Normally, that is simply the length of the workers list (as determined # by the forks or serial setting), however a task/block/play may "throttle" # that limit down. 
rewind_point = len(self._workers) if throttle > 0 and self.ALLOW_BASE_THROTTLING: if task.run_once: display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name()) else: if throttle <= rewind_point: display.debug("task: %s, throttle: %d" % (task.get_name(), throttle)) rewind_point = throttle queued = False starting_worker = self._cur_worker while True: if self._cur_worker >= rewind_point: self._cur_worker = 0 worker_prc = self._workers[self._cur_worker] if worker_prc is None or not worker_prc.is_alive(): self._queued_task_cache[(host.name, task._uuid)] = { 'host': host, 'task': task, 'task_vars': task_vars, 'play_context': play_context } worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader) self._workers[self._cur_worker] = worker_prc self._tqm.send_callback('v2_runner_on_start', host, task) worker_prc.start() display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers))) queued = True self._cur_worker += 1 if self._cur_worker >= rewind_point: self._cur_worker = 0 if queued: break elif self._cur_worker == starting_worker: time.sleep(0.0001) if isinstance(task, Handler): self._pending_handler_results += 1 else: self._pending_results += 1 except (EOFError, IOError, AssertionError) as e: # most likely an abort display.debug("got an error while queuing: %s" % e) return display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action)) def get_task_hosts(self, iterator, task_host, task): if task.run_once: host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts] else: host_list = [task_host.name] return host_list def get_delegated_hosts(self, result, task): host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None) return [host_name or task.delegate_to] def _set_always_delegated_facts(self, result, task): """Sets host facts for ``delegate_to`` hosts for facts that should always be delegated This operation mutates ``result`` to remove the always delegated facts See ``ALWAYS_DELEGATE_FACT_PREFIXES`` """ if task.delegate_to is None: return facts = result['ansible_facts'] always_keys = set() _add = always_keys.add for fact_key in facts: for always_key in ALWAYS_DELEGATE_FACT_PREFIXES: if fact_key.startswith(always_key): _add(fact_key) if always_keys: _pop = facts.pop always_facts = { 'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys) } host_list = self.get_delegated_hosts(result, task) _set_host_facts = self._variable_manager.set_host_facts for target_host in host_list: _set_host_facts(target_host, always_facts) def normalize_task_result(self, task_result): """Normalize a TaskResult to reference actual Host and Task objects when only given the ``Host.name``, or the ``Task._uuid`` Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns Mutates the original object """ if isinstance(task_result._host, string_types): # If the value is a string, it is ``Host.name`` task_result._host = self._inventory.get_host(to_text(task_result._host)) if isinstance(task_result._task, string_types): # If the value is a string, it is ``Task._uuid`` queue_cache_entry = (task_result._host.name, task_result._task) try: found_task = self._queued_task_cache[queue_cache_entry]['task'] except KeyError: # This should only happen due to an implicit task created by the # TaskExecutor, restrict this behavior to 
the explicit use case # of an implicit async_status task if task_result._task_fields.get('action') != 'async_status': raise original_task = Task() else: original_task = found_task.copy(exclude_parent=True, exclude_tasks=True) original_task._parent = found_task._parent original_task.from_attrs(task_result._task_fields) task_result._task = original_task return task_result @debug_closure def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False): ''' Reads results off the final queue and takes appropriate action based on the result (executing callbacks, updating state, etc.). ''' ret_results = [] handler_templar = Templar(self._loader) def search_handler_blocks_by_name(handler_name, handler_blocks): # iterate in reversed order since last handler loaded with the same name wins for handler_block in reversed(handler_blocks): for handler_task in handler_block.block: if handler_task.name: try: if not handler_task.cached_name: if handler_templar.is_template(handler_task.name): handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play, task=handler_task, _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all) handler_task.name = handler_templar.template(handler_task.name) handler_task.cached_name = True # first we check with the full result of get_name(), which may # include the role name (if the handler is from a role). If that # is not found, we resort to the simple name field, which doesn't # have anything extra added to it. candidates = ( handler_task.name, handler_task.get_name(include_role_fqcn=False), handler_task.get_name(include_role_fqcn=True), ) if handler_name in candidates: return handler_task except (UndefinedError, AnsibleUndefinedVariable) as e: # We skip this handler due to the fact that it may be using # a variable in the name that was conditionally included via # set_fact or some other method, and we don't want to error # out unnecessarily if not handler_task.listen: display.warning( "Handler '%s' is unusable because it has no listen topics and " "the name could not be templated (host-specific variables are " "not supported in handler names). 
The error: %s" % (handler_task.name, to_text(e)) ) continue return None cur_pass = 0 while True: try: self._results_lock.acquire() if do_handlers: task_result = self._handler_results.popleft() else: task_result = self._results.popleft() except IndexError: break finally: self._results_lock.release() original_host = task_result._host original_task = task_result._task # all host status messages contain 2 entries: (msg, task_result) role_ran = False if task_result.is_failed(): role_ran = True ignore_errors = original_task.ignore_errors if not ignore_errors: display.debug("marking %s as failed" % original_host.name) if original_task.run_once: # if we're using run_once, we have to fail every host here for h in self._inventory.get_hosts(iterator._play.hosts): if h.name not in self._tqm._unreachable_hosts: iterator.mark_host_failed(h) else: iterator.mark_host_failed(original_host) # grab the current state and if we're iterating on the rescue portion # of a block then we save the failed task in a special var for use # within the rescue/always state, _ = iterator.get_next_task_for_host(original_host, peek=True) if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE: self._tqm._failed_hosts[original_host.name] = True # Use of get_active_state() here helps detect proper state if, say, we are in a rescue # block from an included file (include_tasks). In a non-included rescue case, a rescue # that starts with a new 'block' will have an active state of IteratingStates.TASKS, so we also # check the current state block tree to see if any blocks are rescuing. if state and (iterator.get_active_state(state).run_state == IteratingStates.RESCUE or iterator.is_any_block_rescuing(state)): self._tqm._stats.increment('rescued', original_host.name) self._variable_manager.set_nonpersistent_facts( original_host.name, dict( ansible_failed_task=wrap_var(original_task.serialize()), ansible_failed_result=task_result._result, ), ) else: self._tqm._stats.increment('failures', original_host.name) else: self._tqm._stats.increment('ok', original_host.name) self._tqm._stats.increment('ignored', original_host.name) if 'changed' in task_result._result and task_result._result['changed']: self._tqm._stats.increment('changed', original_host.name) self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors) elif task_result.is_unreachable(): ignore_unreachable = original_task.ignore_unreachable if not ignore_unreachable: self._tqm._unreachable_hosts[original_host.name] = True iterator._play._removed_hosts.append(original_host.name) else: self._tqm._stats.increment('skipped', original_host.name) task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name self._tqm._stats.increment('dark', original_host.name) self._tqm.send_callback('v2_runner_on_unreachable', task_result) elif task_result.is_skipped(): self._tqm._stats.increment('skipped', original_host.name) self._tqm.send_callback('v2_runner_on_skipped', task_result) else: role_ran = True if original_task.loop: # this task had a loop, and has more than one result, so # loop over all of them instead of a single result result_items = task_result._result.get('results', []) else: result_items = [task_result._result] for result_item in result_items: if '_ansible_notify' in result_item: if task_result.is_changed(): # The shared dictionary for notified handlers is a proxy, which # does not detect when sub-objects within the proxy are modified. 
# So, per the docs, we reassign the list so the proxy picks up and # notifies all other threads for handler_name in result_item['_ansible_notify']: found = False # Find the handler using the above helper. First we look up the # dependency chain of the current task (if it's from a role), otherwise # we just look through the list of handlers in the current play/all # roles and use the first one that matches the notify name target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers) if target_handler is not None: found = True if target_handler.notify_host(original_host): self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host) for listening_handler_block in iterator._play.handlers: for listening_handler in listening_handler_block.block: listeners = getattr(listening_handler, 'listen', []) or [] if not listeners: continue listeners = listening_handler.get_validated_value( 'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar ) if handler_name not in listeners: continue else: found = True if listening_handler.notify_host(original_host): self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host) # and if none were found, then we raise an error if not found: msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening " "handlers list" % handler_name) if C.ERROR_ON_MISSING_HANDLER: raise AnsibleError(msg) else: display.warning(msg) if 'add_host' in result_item: # this task added a new host (add_host module) new_host_info = result_item.get('add_host', dict()) self._add_host(new_host_info, result_item) post_process_whens(result_item, original_task, handler_templar) elif 'add_group' in result_item: # this task added a new group (group_by module) self._add_group(original_host, result_item) post_process_whens(result_item, original_task, handler_templar) if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG: # if delegated fact and we are delegating facts, we need to change target host for them if original_task.delegate_to is not None and original_task.delegate_facts: host_list = self.get_delegated_hosts(result_item, original_task) else: # Set facts that should always be on the delegated hosts self._set_always_delegated_facts(result_item, original_task) host_list = self.get_task_hosts(iterator, original_host, original_task) if original_task.action in C._ACTION_INCLUDE_VARS: for (var_name, var_value) in result_item['ansible_facts'].items(): # find the host we're actually referring too here, which may # be a host that is not really in inventory at all for target_host in host_list: self._variable_manager.set_host_variable(target_host, var_name, var_value) else: cacheable = result_item.pop('_ansible_facts_cacheable', False) for target_host in host_list: # so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact' # to avoid issues with precedence and confusion with set_fact normal operation, # we set BOTH fact and nonpersistent_facts (aka hostvar) # when fact is retrieved from cache in subsequent operations it will have the lower precedence, # but for playbook setting it the 'higher' precedence is kept is_set_fact = original_task.action in C._ACTION_SET_FACT if not is_set_fact or cacheable: self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy()) if is_set_fact: self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy()) if 'ansible_stats' in 
result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']: if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']: host_list = self.get_task_hosts(iterator, original_host, original_task) else: host_list = [None] data = result_item['ansible_stats']['data'] aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate'] for myhost in host_list: for k in data.keys(): if aggregate: self._tqm._stats.update_custom_stats(k, data[k], myhost) else: self._tqm._stats.set_custom_stats(k, data[k], myhost) if 'diff' in task_result._result: if self._diff or getattr(original_task, 'diff', False): self._tqm.send_callback('v2_on_file_diff', task_result) if not isinstance(original_task, TaskInclude): self._tqm._stats.increment('ok', original_host.name) if 'changed' in task_result._result and task_result._result['changed']: self._tqm._stats.increment('changed', original_host.name) # finally, send the ok for this task self._tqm.send_callback('v2_runner_on_ok', task_result) # register final results if original_task.register: host_list = self.get_task_hosts(iterator, original_host, original_task) clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result)) if 'invocation' in clean_copy: del clean_copy['invocation'] for target_host in host_list: self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy}) if do_handlers: self._pending_handler_results -= 1 else: self._pending_results -= 1 if original_host.name in self._blocked_hosts: del self._blocked_hosts[original_host.name] # If this is a role task, mark the parent role as being run (if # the task was ok or failed, but not skipped or unreachable) if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:? 
# lookup the role in the ROLE_CACHE to make sure we're dealing # with the correct object and mark it as executed for (entry, role_obj) in iterator._play.ROLE_CACHE[original_task._role.get_name()].items(): if role_obj._uuid == original_task._role._uuid: role_obj._had_task_run[original_host.name] = True ret_results.append(task_result) if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes: break cur_pass += 1 return ret_results def _wait_on_handler_results(self, iterator, handler, notified_hosts): ''' Wait for the handler tasks to complete, using a short sleep between checks to ensure we don't spin lock ''' ret_results = [] handler_results = 0 display.debug("waiting for handler results...") while (self._pending_handler_results > 0 and handler_results < len(notified_hosts) and not self._tqm._terminated): if self._tqm.has_dead_workers(): raise AnsibleError("A worker was found in a dead state") results = self._process_pending_results(iterator, do_handlers=True) ret_results.extend(results) handler_results += len([ r._host for r in results if r._host in notified_hosts and r.task_name == handler.name]) if self._pending_handler_results > 0: time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL) display.debug("no more pending handlers, returning what we have") return ret_results def _wait_on_pending_results(self, iterator): ''' Wait for the shared counter to drop to zero, using a short sleep between checks to ensure we don't spin lock ''' ret_results = [] display.debug("waiting for pending results...") while self._pending_results > 0 and not self._tqm._terminated: if self._tqm.has_dead_workers(): raise AnsibleError("A worker was found in a dead state") results = self._process_pending_results(iterator) ret_results.extend(results) if self._pending_results > 0: time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL) display.debug("no more pending results, returning what we have") return ret_results def _add_host(self, host_info, result_item): ''' Helper function to add a new host to inventory based on a task result. ''' changed = False if host_info: host_name = host_info.get('host_name') # Check if host in inventory, add if not if host_name not in self._inventory.hosts: self._inventory.add_host(host_name, 'all') self._hosts_cache_all.append(host_name) changed = True new_host = self._inventory.hosts.get(host_name) # Set/update the vars for this host new_host_vars = new_host.get_vars() new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict())) if new_host_vars != new_host_combined_vars: new_host.vars = new_host_combined_vars changed = True new_groups = host_info.get('groups', []) for group_name in new_groups: if group_name not in self._inventory.groups: group_name = self._inventory.add_group(group_name) changed = True new_group = self._inventory.groups[group_name] if new_group.add_host(self._inventory.hosts[host_name]): changed = True # reconcile inventory, ensures inventory rules are followed if changed: self._inventory.reconcile_inventory() result_item['changed'] = changed def _add_group(self, host, result_item): ''' Helper function to add a group (if it does not exist), and to assign the specified host to that group. 
        '''
        changed = False

        # the host here is from the executor side, which means it was a
        # serialized/cloned copy and we'll need to look up the proper
        # host object from the master inventory
        real_host = self._inventory.hosts.get(host.name)
        if real_host is None:
            if host.name == self._inventory.localhost.name:
                real_host = self._inventory.localhost
            else:
                raise AnsibleError('%s cannot be matched in inventory' % host.name)
        group_name = result_item.get('add_group')
        parent_group_names = result_item.get('parent_groups', [])

        if group_name not in self._inventory.groups:
            group_name = self._inventory.add_group(group_name)

        for name in parent_group_names:
            if name not in self._inventory.groups:
                # create the new group and add it to inventory
                self._inventory.add_group(name)
                changed = True

        group = self._inventory.groups[group_name]
        for parent_group_name in parent_group_names:
            parent_group = self._inventory.groups[parent_group_name]
            new = parent_group.add_child_group(group)
            if new and not changed:
                changed = True

        if real_host not in group.get_hosts():
            changed = group.add_host(real_host)

        if group not in real_host.get_groups():
            changed = real_host.add_group(group)

        if changed:
            self._inventory.reconcile_inventory()

        result_item['changed'] = changed

    def _copy_included_file(self, included_file):
        ''' A proven safe and performant way to create a copy of an included file '''
        ti_copy = included_file._task.copy(exclude_parent=True)
        ti_copy._parent = included_file._task._parent

        temp_vars = ti_copy.vars.copy()
        temp_vars.update(included_file._vars)
        ti_copy.vars = temp_vars

        return ti_copy

    def _load_included_file(self, included_file, iterator, is_handler=False):
        '''
        Loads an included YAML file of tasks, applying the optional set of variables.
        '''
        display.debug("loading included file: %s" % included_file._filename)
        try:
            data = self._loader.load_from_file(included_file._filename)
            if data is None:
                return []
            elif not isinstance(data, list):
                raise AnsibleError("included task files must contain a list of tasks")

            ti_copy = self._copy_included_file(included_file)

            block_list = load_list_of_blocks(
                data,
                play=iterator._play,
                parent_block=ti_copy.build_parent_block(),
                role=included_file._task._role,
                use_handlers=is_handler,
                loader=self._loader,
                variable_manager=self._variable_manager,
            )

            # since we skip incrementing the stats when the task result is
            # first processed, we do so now for each host in the list
            for host in included_file._hosts:
                self._tqm._stats.increment('ok', host.name)
        except AnsibleParserError:
            raise
        except AnsibleError as e:
            if isinstance(e, AnsibleFileNotFound):
                reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
            else:
                reason = to_text(e)

            for r in included_file._results:
                r._result['failed'] = True

            # mark all of the hosts including this file as failed, send callbacks,
            # and increment the stats for this host
            for host in included_file._hosts:
                tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
                iterator.mark_host_failed(host)
                self._tqm._failed_hosts[host.name] = True
                self._tqm._stats.increment('failures', host.name)
                self._tqm.send_callback('v2_runner_on_failed', tr)
            return []

        # finally, send the callback and return the list of blocks loaded
        self._tqm.send_callback('v2_playbook_on_include', included_file)
        display.debug("done processing included file")
        return block_list

    def run_handlers(self, iterator, play_context):
        '''
        Runs handlers on those hosts which have been notified.
        '''
        result = self._tqm.RUN_OK

        for handler_block in iterator._play.handlers:
            # FIXME: handlers need to support the rescue/always portions of blocks too,
            #        but this may take some work in the iterator and gets tricky when
            #        we consider the ability of meta tasks to flush handlers
            for handler in handler_block.block:
                if handler.notified_hosts:
                    result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
                    if not result:
                        break
        return result

    def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):

        # FIXME: need to use iterator.get_failed_hosts() instead?
        # if not len(self.get_hosts_remaining(iterator._play)):
        #     self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
        #     result = False
        #     break
        if notified_hosts is None:
            notified_hosts = handler.notified_hosts[:]

        # strategy plugins that filter hosts need access to the iterator to identify failed hosts
        failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
        notified_hosts = self._filter_notified_hosts(notified_hosts)
        notified_hosts += failed_hosts

        if len(notified_hosts) > 0:
            self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)

        bypass_host_loop = False
        try:
            action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
            if getattr(action, 'BYPASS_HOST_LOOP', False):
                bypass_host_loop = True
        except KeyError:
            # we don't care here, because the action may simply not have a
            # corresponding action plugin
            pass

        host_results = []
        for host in notified_hosts:
            if not iterator.is_failed(host) or iterator._play.force_handlers:
                task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
                                                            _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
                self.add_tqm_variables(task_vars, play=iterator._play)
                templar = Templar(loader=self._loader, variables=task_vars)
                if not handler.cached_name:
                    handler.name = templar.template(handler.name)
                    handler.cached_name = True

                self._queue_task(host, handler, task_vars, play_context)

                if templar.template(handler.run_once) or bypass_host_loop:
                    break

        # collect the results from the handler run
        host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)

        included_files = IncludedFile.process_include_results(
            host_results,
            iterator=iterator,
            loader=self._loader,
            variable_manager=self._variable_manager
        )

        result = True
        if len(included_files) > 0:
            for included_file in included_files:
                try:
                    new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
                    # for every task in each block brought in by the include, add the list
                    # of hosts which included the file to the notified_handlers dict
                    for block in new_blocks:
                        iterator._play.handlers.append(block)
                        for task in block.block:
                            task_name = task.get_name()
                            display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
                            task.notified_hosts = included_file._hosts[:]
                            result = self._do_handler_run(
                                handler=task,
                                handler_name=task_name,
                                iterator=iterator,
                                play_context=play_context,
                                notified_hosts=included_file._hosts[:],
                            )
                            if not result:
                                break
                except AnsibleParserError:
                    raise
                except AnsibleError as e:
                    for host in included_file._hosts:
                        iterator.mark_host_failed(host)
                        self._tqm._failed_hosts[host.name] = True
                    display.warning(to_text(e))
                    continue

        # remove hosts from notification list
        handler.notified_hosts = [
            h for h in handler.notified_hosts
            if h not in notified_hosts]
        display.debug("done running handlers, result is: %s" % result)
        return result

    def _filter_notified_failed_hosts(self, iterator, notified_hosts):
        return []

    def _filter_notified_hosts(self, notified_hosts):
        '''
        Filter notified hosts accordingly to strategy
        '''
        # As main strategy is linear, we do not filter hosts
        # We return a copy to avoid race conditions
        return notified_hosts[:]

    def _take_step(self, task, host=None):

        ret = False
        msg = u'Perform task: %s ' % task
        if host:
            msg += u'on %s ' % host
        msg += u'(N)o/(y)es/(c)ontinue: '
        resp = display.prompt(msg)

        if resp.lower() in ['y', 'yes']:
            display.debug("User ran task")
            ret = True
        elif resp.lower() in ['c', 'continue']:
            display.debug("User ran task and canceled step mode")
            self._step = False
            ret = True
        else:
            display.debug("User skipped task")

        display.banner(msg)

        return ret

    def _cond_not_supported_warn(self, task_name):
        display.warning("%s task does not support when conditional" % task_name)

    def _execute_meta(self, task, play_context, iterator, target_host):

        # meta tasks store their args in the _raw_params field of args,
        # since they do not use k=v pairs, so get that
        meta_action = task.args.get('_raw_params')

        def _evaluate_conditional(h):
            all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
                                                       _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
            templar = Templar(loader=self._loader, variables=all_vars)
            return task.evaluate_conditional(templar, all_vars)

        skipped = False
        msg = ''
        skip_reason = '%s conditional evaluated to False' % meta_action
        self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)

        # These don't support "when" conditionals
        if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
            self._cond_not_supported_warn(meta_action)

        if meta_action == 'noop':
            msg = "noop"
        elif meta_action == 'flush_handlers':
            self._flushed_hosts[target_host] = True
            self.run_handlers(iterator, play_context)
            self._flushed_hosts[target_host] = False
            msg = "ran handlers"
        elif meta_action == 'refresh_inventory':
            self._inventory.refresh_inventory()
            self._set_hosts_cache(iterator._play)
            msg = "inventory successfully refreshed"
        elif meta_action == 'clear_facts':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    hostname = host.get_name()
                    self._variable_manager.clear_facts(hostname)
                msg = "facts cleared"
            else:
                skipped = True
                skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
        elif meta_action == 'clear_host_errors':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    self._tqm._failed_hosts.pop(host.name, False)
                    self._tqm._unreachable_hosts.pop(host.name, False)
                    iterator.set_fail_state_for_host(host.name, FailedStates.NONE)
                msg = "cleared host errors"
            else:
                skipped = True
                skip_reason += ', not clearing host error state for %s' % target_host.name
        elif meta_action == 'end_batch':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    if host.name not in self._tqm._unreachable_hosts:
                        iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
                msg = "ending batch"
            else:
                skipped = True
                skip_reason += ', continuing current batch'
        elif meta_action == 'end_play':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    if host.name not in self._tqm._unreachable_hosts:
                        iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
                # end_play is used in PlaybookExecutor/TQM to indicate that
                # the whole play is supposed to be ended as opposed to just a batch
                iterator.end_play = True
                msg = "ending play"
            else:
                skipped = True
                skip_reason += ', continuing play'
        elif meta_action == 'end_host':
            if _evaluate_conditional(target_host):
                iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
                iterator._play._removed_hosts.append(target_host.name)
                msg = "ending play for %s" % target_host.name
            else:
                skipped = True
                skip_reason += ", continuing execution for %s" % target_host.name
                # TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
                msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
        elif meta_action == 'role_complete':
            # Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
            # How would this work with allow_duplicates??
            if task.implicit:
                if target_host.name in task._role._had_task_run:
                    task._role._completed[target_host.name] = True
                    msg = 'role_complete for %s' % target_host.name
        elif meta_action == 'reset_connection':
            all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
                                                       _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
            templar = Templar(loader=self._loader, variables=all_vars)

            # apply the given task's information to the connection info,
            # which may override some fields already set by the play or
            # the options specified on the command line
            play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)

            # fields set from the play/task may be based on variables, so we have to
            # do the same kind of post validation step on it here before we use it.
            play_context.post_validate(templar=templar)

            # now that the play context is finalized, if the remote_addr is not set
            # default to using the host's address field as the remote address
            if not play_context.remote_addr:
                play_context.remote_addr = target_host.address

            # We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist.
            play_context.update_vars(all_vars)

            if target_host in self._active_connections:
                connection = Connection(self._active_connections[target_host])
                del self._active_connections[target_host]
            else:
                connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
                connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
                play_context.set_attributes_from_plugin(connection)

            if connection:
                try:
                    connection.reset()
                    msg = 'reset connection'
                except ConnectionError as e:
                    # most likely socket is already closed
                    display.debug("got an error while closing persistent connection: %s" % e)
            else:
                msg = 'no connection, nothing to reset'
        else:
            raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)

        result = {'msg': msg}
        if skipped:
            result['skipped'] = True
            result['skip_reason'] = skip_reason
        else:
            result['changed'] = False

        display.vv("META: %s" % msg)

        res = TaskResult(target_host, task, result)
        if skipped:
            self._tqm.send_callback('v2_runner_on_skipped', res)
        return [res]

    def get_hosts_left(self, iterator):
        ''' returns list of available hosts for this iterator by filtering out unreachables '''

        hosts_left = []
        for host in self._hosts_cache:
            if host not in self._tqm._unreachable_hosts:
                try:
                    hosts_left.append(self._inventory.hosts[host])
                except KeyError:
                    hosts_left.append(self._inventory.get_host(host))
        return hosts_left

    def update_active_connections(self, results):
        ''' updates the current active persistent connections '''
        for r in results:
            if 'args' in r._task_fields:
                socket_path = r._task_fields['args'].get('_ansible_socket')
                if socket_path:
                    if r._host not in self._active_connections:
                        self._active_connections[r._host] = socket_path


class NextAction(object):
    """ The next action after an interpreter's exit. """
    REDO = 1
    CONTINUE = 2
    EXIT = 3

    def __init__(self, result=EXIT):
        self.result = result


class Debugger(cmd.Cmd):
    prompt_continuous = '> '  # multiple lines

    def __init__(self, task, host, task_vars, play_context, result, next_action):
        # cmd.Cmd is old-style class
        cmd.Cmd.__init__(self)

        self.prompt = '[%s] %s (debug)> ' % (host, task)
        self.intro = None
        self.scope = {}
        self.scope['task'] = task
        self.scope['task_vars'] = task_vars
        self.scope['host'] = host
        self.scope['play_context'] = play_context
        self.scope['result'] = result
        self.next_action = next_action

    def cmdloop(self):
        try:
            cmd.Cmd.cmdloop(self)
        except KeyboardInterrupt:
            pass

    do_h = cmd.Cmd.do_help

    def do_EOF(self, args):
        """Quit"""
        return self.do_quit(args)

    def do_quit(self, args):
        """Quit"""
        display.display('User interrupted execution')
        self.next_action.result = NextAction.EXIT
        return True

    do_q = do_quit

    def do_continue(self, args):
        """Continue to next result"""
        self.next_action.result = NextAction.CONTINUE
        return True

    do_c = do_continue

    def do_redo(self, args):
        """Schedule task for re-execution. The re-execution may not be the next result"""
        self.next_action.result = NextAction.REDO
        return True

    do_r = do_redo

    def do_update_task(self, args):
        """Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
        templar = Templar(None, variables=self.scope['task_vars'])
        task = self.scope['task']
        task = task.load_data(task._ds)
        task.post_validate(templar)
        self.scope['task'] = task

    do_u = do_update_task

    def evaluate(self, args):
        try:
            return eval(args, globals(), self.scope)
        except Exception:
            t, v = sys.exc_info()[:2]
            if isinstance(t, str):
                exc_type_name = t
            else:
                exc_type_name = t.__name__
            display.display('***%s:%s' % (exc_type_name, repr(v)))
            raise

    def do_pprint(self, args):
        """Pretty Print"""
        try:
            result = self.evaluate(args)
            display.display(pprint.pformat(result))
        except Exception:
            pass

    do_p = do_pprint

    def execute(self, args):
        try:
            code = compile(args + '\n', '<stdin>', 'single')
            exec(code, globals(), self.scope)
        except Exception:
            t, v = sys.exc_info()[:2]
            if isinstance(t, str):
                exc_type_name = t
            else:
                exc_type_name = t.__name__
            display.display('***%s:%s' % (exc_type_name, repr(v)))
            raise

    def default(self, line):
        try:
            self.execute(line)
        except Exception:
            pass
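The `_execute_meta` helper above shows the pattern the engine uses whenever it evaluates a conditional on the controller side: resolve the host's full variable set first, then hand those variables to a `Templar`. A condensed sketch of that pattern as a standalone function (the name `evaluate_when` and the free-function form are illustrative only):

```python
from ansible.template import Templar


def evaluate_when(variable_manager, loader, play, host, task):
    # gather everything visible to this host/task (play vars, host vars,
    # extra vars, ...) into one namespace
    all_vars = variable_manager.get_vars(play=play, host=host, task=task)
    # template the task's 'when' list against exactly that namespace
    templar = Templar(loader=loader, variables=all_vars)
    return task.evaluate_conditional(templar, all_vars)
```

The `failed_when`/`changed_when` reports below come down to this pattern being skipped for results that are post-processed on the controller, so the `Templar` in play lacked the task's variables.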
closed
ansible/ansible
https://github.com/ansible/ansible
75,971
add_host module cannot handle a variable in a condition
``` $ ansible-playbook -i localhost, conditional.yml -e "{test: 1}" PLAY [localhost] ********************************************************************************************************************************************************************************* TASK [add_host] ********************************************************************************************************************************************************************************** ERROR! The conditional check 'test == 2' failed. The error was: error while evaluating conditional (test == 2): 'test' is undefined ``` ### pattern 2: using `debug` module, it worked as desired ``` --- - hosts: localhost gather_facts: false vars: test: 1 tasks: - debug: msg: "debug message" failed_when: test == 2 ``` It works. ``` $ ansible-playbook -i localhost, debug.yml PLAY [localhost] ********************************************************************************************************************************************************************************* TASK [debug] ************************************************************************************************************************************************************************************* ok: [localhost] => { "msg": "debug message" } PLAY RECAP *************************************************************************************************************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` When passed `2`, it failed as expected. ``` $ ansible-playbook -i localhost, debug.yml -e "{test: 2}" PLAY [localhost] ********************************************************************************************************************************************************************************* TASK [debug] ************************************************************************************************************************************************************************************* fatal: [localhost]: FAILED! => { "msg": "debug message" } PLAY RECAP *************************************************************************************************************************************************************************************** localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Expected Results The `pattern 1` should work as the `pattern 2` from the point of view of handling the conditional. ### Actual Results ```console (already pasted above) ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
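The traceback quoted in the related report below names `post_process_whens` in `lib/ansible/plugins/strategy/__init__.py` as the place where `changed_when`/`failed_when` are re-evaluated on the controller for actions such as `add_host`. A rough, hypothetical reconstruction of what that re-evaluation has to do for a play var like `test` to resolve (a sketch, not the actual patch):

```python
from ansible.playbook.conditional import Conditional


def post_process_whens(result, task, templar):
    # both keywords are evaluated the same way on the controller side
    for keyword, result_key in (('changed_when', 'changed'),
                                ('failed_when', 'failed_when_result')):
        when = getattr(task, keyword)
        if when:
            cond = Conditional(loader=templar._loader)
            # 'failed_when: true' parses as a bare bool, not a list
            cond.when = when if isinstance(when, list) else [when]
            # the templar must already carry the task's full variable set,
            # otherwise play vars such as 'test' come up undefined
            result[result_key] = cond.evaluate_conditional(
                templar, templar.available_variables)
```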
https://github.com/ansible/ansible/issues/75971
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2021-10-08T06:47:15Z
python
2022-02-04T11:35:23Z
test/integration/targets/add_host/tasks/main.yml
# test code for the add_host action
# (c) 2015, Matt Davis <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

# See https://github.com/ansible/ansible/issues/36045
- set_fact:
    inventory_data:
      ansible_ssh_common_args: "-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
      # ansible_ssh_host: "127.0.0.3"
      ansible_host: "127.0.0.3"
      ansible_ssh_pass: "foobar"
      # ansible_ssh_port: "2222"
      ansible_port: "2222"
      ansible_ssh_private_key_file: "/tmp/inventory-cloudj9cGz5/identity"
      ansible_ssh_user: "root"
      hostname: "newdynamichost2"

- name: Show inventory_data for 36045
  debug:
    msg: "{{ inventory_data }}"

- name: Add host from dict 36045
  add_host: "{{ inventory_data }}"

- name: show newly added host
  debug:
    msg: "{{ hostvars['newdynamichost2'].group_names }}"

- name: ensure that dynamically-added newdynamichost2 is visible via hostvars, groups 36045
  assert:
    that:
      - hostvars['newdynamichost2'] is defined
      - hostvars['newdynamichost2'].group_names is defined

# end of https://github.com/ansible/ansible/issues/36045 related tests

- name: add a host to the runtime inventory
  add_host:
    name: newdynamichost
    groups: newdynamicgroup
    a_var: from add_host

- debug: msg={{hostvars['newdynamichost'].group_names}}

- name: ensure that dynamically-added host is visible via hostvars, groups, etc (there are several caches that could break this)
  assert:
    that:
      - hostvars['bogushost'] is not defined  # there was a bug where an undefined host was a "type" instead of an instance- ensure this works before we rely on it
      - hostvars['newdynamichost'] is defined
      - hostvars['newdynamichost'].group_names is defined
      - "'newdynamicgroup' in hostvars['newdynamichost'].group_names"
      - hostvars['newdynamichost']['bogusvar'] is not defined
      - hostvars['newdynamichost']['a_var'] is defined
      - hostvars['newdynamichost']['a_var'] == 'from add_host'
      - groups['bogusgroup'] is not defined  # same check as above to ensure that bogus groups are undefined...
      - groups['newdynamicgroup'] is defined
      - "'newdynamichost' in groups['newdynamicgroup']"

# Tests for idempotency

- name: Add testhost01 dynamic host
  add_host:
    name: testhost01
  register: add_testhost01

- name: Try adding testhost01 again, with no changes
  add_host:
    name: testhost01
  register: add_testhost01_idem

- name: Add a host variable to testhost01
  add_host:
    name: testhost01
    foo: bar
  register: hostvar_testhost01

- name: Add the same host variable to testhost01, with no changes
  add_host:
    name: testhost01
    foo: bar
  register: hostvar_testhost01_idem

- name: Add another host, testhost02
  add_host:
    name: testhost02
  register: add_testhost02

- name: Add it again for good measure
  add_host:
    name: testhost02
  register: add_testhost02_idem

- name: Add testhost02 to a group
  add_host:
    name: testhost02
    groups:
      - testhostgroup
  register: add_group_testhost02

- name: Add testhost01 to the same group
  add_host:
    name: testhost01
    groups:
      - testhostgroup
  register: add_group_testhost01

- name: Add testhost02 to the group again
  add_host:
    name: testhost02
    groups:
      - testhostgroup
  register: add_group_testhost02_idem

- name: Add testhost01 to the group again
  add_host:
    name: testhost01
    groups:
      - testhostgroup
  register: add_group_testhost01_idem

- assert:
    that:
      - add_testhost01 is changed
      - add_testhost01_idem is not changed
      - hostvar_testhost01 is changed
      - hostvar_testhost01_idem is not changed
      - add_testhost02 is changed
      - add_testhost02_idem is not changed
      - add_group_testhost02 is changed
      - add_group_testhost01 is changed
      - add_group_testhost02_idem is not changed
      - add_group_testhost01_idem is not changed
      - groups['testhostgroup']|length == 2
      - "'testhost01' in groups['testhostgroup']"
      - "'testhost02' in groups['testhostgroup']"
      - hostvars['testhost01']['foo'] == 'bar'

- name: Give invalid input
  add_host: namenewdynamichost groupsnewdynamicgroup a_varfromadd_host
  ignore_errors: true
  register: badinput

- name: verify we detected bad input
  assert:
    that:
      - badinput is failed
closed
ansible/ansible
https://github.com/ansible/ansible
75,971
add_host module cannot handle a variable in a condition
https://github.com/ansible/ansible/issues/75971
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2021-10-08T06:47:15Z
python
2022-02-04T11:35:23Z
test/integration/targets/changed_when/tasks/main.yml
# test code for the changed_when parameter
# (c) 2014, James Tanner <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

- name: ensure shell is always changed
  shell: ls -al /tmp
  register: shell_result

- debug: var=shell_result

- name: changed should always be true for shell
  assert:
    that:
      - "shell_result.changed"

- name: test changed_when override for shell
  shell: ls -al /tmp
  changed_when: False
  register: shell_result

- debug: var=shell_result

- name: changed should be false
  assert:
    that:
      - "not shell_result.changed"

- name: Add hosts to test group and ensure it appears as changed
  group_by:
    key: "cw_test1_{{ inventory_hostname }}"
  register: groupby

- name: verify its changed
  assert:
    that:
      - groupby is changed

- name: Add hosts to test group and ensure it does NOT appear as changed
  group_by:
    key: "cw_test2_{{ inventory_hostname }}"
  changed_when: False
  register: groupby

- name: verify its not changed
  assert:
    that:
      - groupby is not changed

- name: invalid conditional
  command: echo foo
  changed_when: boomboomboom
  register: invalid_conditional
  ignore_errors: true

- assert:
    that:
      - invalid_conditional is failed
      - invalid_conditional.stdout is defined
      - invalid_conditional.changed_when_result is contains('boomboomboom')
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME add_host module ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION ``` $ ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Local action, Mac OS ##### STEPS TO REPRODUCE Run this playbook with >=1 host in the inventory ```yaml --- - name: add hosts to inventory hosts: all gather_facts: false connection: local vars: num_hosts: 10 tasks: - name: create inventory add_host: name: 'host-{{item}}' groups: dynamic ansible_connection: local host_id: '{{item}}' with_sequence: start=1 end={{num_hosts}} format=%d # changed_when: true notify: - single host handler handlers: - name: single host handler command: 'true' ``` ##### EXPECTED RESULTS I expect the tasks to be changed, and I expect the handler to be ran. look at output from Ansible 2.9 ``` $ ansible-playbook -i host1, dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* changed: [host1] => (item=1) changed: [host1] => (item=2) changed: [host1] => (item=3) changed: [host1] => (item=4) changed: [host1] => (item=5) changed: [host1] => (item=6) changed: [host1] => (item=7) changed: [host1] => (item=8) changed: [host1] => (item=9) changed: [host1] => (item=10) RUNNING HANDLER [single host handler] *********************************************************************************************************************************************** [WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. 
See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. changed: [host1] PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ##### ACTUAL RESULTS with `# changed_when: true` left as a comment ``` $ ansible-playbook -i host1, dynamic_inventory.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* ok: [host1] => (item=1) ok: [host1] => (item=2) ok: [host1] => (item=3) ok: [host1] => (item=4) ok: [host1] => (item=5) ok: [host1] => (item=6) ok: [host1] => (item=7) ok: [host1] => (item=8) ok: [host1] => (item=9) ok: [host1] => (item=10) PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` no handler is ran When uncommenting `changed_when: true` ``` $ ansible-playbook -i host1, dynamic_inventory.yml -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-playbook 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file Parsed host1, inventory source with host_list plugin PLAYBOOK: dynamic_inventory.yml ***************************************************************************************************************************************************** 1 plays in dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* META: ran handlers TASK [create inventory] ************************************************************************************************************************************************************* task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9 creating host via 'add_host': hostname=host-1 changed: [host1] => (item=1) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-1", "host_vars": { "ansible_connection": "local", "host_id": "1" } }, "ansible_loop_var": "item", "changed": true, "item": "1" } creating host via 'add_host': hostname=host-2 changed: [host1] => (item=2) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-2", "host_vars": { "ansible_connection": "local", "host_id": "2" } }, "ansible_loop_var": "item", "changed": true, "item": "2" } creating host via 'add_host': hostname=host-3 changed: [host1] => (item=3) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-3", "host_vars": { "ansible_connection": "local", "host_id": "3" } }, "ansible_loop_var": "item", "changed": true, "item": "3" } creating host via 'add_host': hostname=host-4 changed: [host1] => (item=4) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-4", "host_vars": { "ansible_connection": "local", "host_id": "4" } }, "ansible_loop_var": "item", "changed": true, "item": "4" } creating host via 'add_host': hostname=host-5 changed: [host1] => (item=5) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-5", "host_vars": { "ansible_connection": "local", "host_id": "5" } }, "ansible_loop_var": "item", "changed": true, "item": "5" } creating host via 'add_host': hostname=host-6 changed: [host1] => (item=6) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-6", "host_vars": { "ansible_connection": "local", "host_id": "6" } }, "ansible_loop_var": "item", "changed": true, "item": "6" } creating host via 'add_host': hostname=host-7 changed: [host1] => (item=7) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-7", "host_vars": { "ansible_connection": "local", "host_id": "7" } }, "ansible_loop_var": "item", "changed": true, "item": "7" } creating host via 'add_host': hostname=host-8 changed: [host1] => (item=8) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-8", "host_vars": { "ansible_connection": "local", "host_id": "8" } }, "ansible_loop_var": "item", "changed": 
true, "item": "8" } creating host via 'add_host': hostname=host-9 changed: [host1] => (item=9) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-9", "host_vars": { "ansible_connection": "local", "host_id": "9" } }, "ansible_loop_var": "item", "changed": true, "item": "9" } creating host via 'add_host': hostname=host-10 changed: [host1] => (item=10) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-10", "host_vars": { "ansible_connection": "local", "host_id": "10" } }, "ansible_loop_var": "item", "changed": true, "item": "10" } NOTIFIED HANDLER single host handler for host1 ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional for conditional in self.when: TypeError: 'bool' object is not iterable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run results = pbex.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run result = self._tqm.run(play=play) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run play_return = strategy.run(iterator, play_context) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run results += self._wait_on_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results results = self._process_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results post_process_whens(result_item, original_task, handler_templar) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds) UnboundLocalError: local variable 'conditional' referenced before assignment ```
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
changelogs/fragments/71627-add_host-group_by-fix-changed_when-in-loop.yml
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
lib/ansible/constants.py
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]> # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import re from ast import literal_eval from jinja2 import Template from string import ascii_letters, digits from ansible.config.manager import ConfigManager, ensure_type from ansible.module_utils._text import to_text from ansible.module_utils.common.collections import Sequence from ansible.module_utils.parsing.convert_bool import BOOLEANS_TRUE from ansible.module_utils.six import string_types from ansible.release import __version__ from ansible.utils.fqcn import add_internal_fqcns def _warning(msg): ''' display is not guaranteed here, nor it being the full class, but try anyways, fallback to sys.stderr.write ''' try: from ansible.utils.display import Display Display().warning(msg) except Exception: import sys sys.stderr.write(' [WARNING] %s\n' % (msg)) def _deprecated(msg, version): ''' display is not guaranteed here, nor it being the full class, but try anyways, fallback to sys.stderr.write ''' try: from ansible.utils.display import Display Display().deprecated(msg, version=version) except Exception: import sys sys.stderr.write(' [DEPRECATED] %s, to be removed in %s\n' % (msg, version)) def set_constant(name, value, export=vars()): ''' sets constants and returns resolved options dict ''' export[name] = value class _DeprecatedSequenceConstant(Sequence): def __init__(self, value, msg, version): self._value = value self._msg = msg self._version = version def __len__(self): _deprecated(self._msg, self._version) return len(self._value) def __getitem__(self, y): _deprecated(self._msg, self._version) return self._value[y] # CONSTANTS ### yes, actual ones # The following are hard-coded action names _ACTION_DEBUG = add_internal_fqcns(('debug', )) _ACTION_IMPORT_PLAYBOOK = add_internal_fqcns(('import_playbook', )) _ACTION_IMPORT_ROLE = add_internal_fqcns(('import_role', )) _ACTION_IMPORT_TASKS = add_internal_fqcns(('import_tasks', )) _ACTION_INCLUDE = add_internal_fqcns(('include', )) _ACTION_INCLUDE_ROLE = add_internal_fqcns(('include_role', )) _ACTION_INCLUDE_TASKS = add_internal_fqcns(('include_tasks', )) _ACTION_INCLUDE_VARS = add_internal_fqcns(('include_vars', )) _ACTION_META = add_internal_fqcns(('meta', )) _ACTION_SET_FACT = add_internal_fqcns(('set_fact', )) _ACTION_SETUP = add_internal_fqcns(('setup', )) _ACTION_HAS_CMD = add_internal_fqcns(('command', 'shell', 'script')) _ACTION_ALLOWS_RAW_ARGS = _ACTION_HAS_CMD + add_internal_fqcns(('raw', )) _ACTION_ALL_INCLUDES = _ACTION_INCLUDE + _ACTION_INCLUDE_TASKS + _ACTION_INCLUDE_ROLE _ACTION_ALL_INCLUDE_IMPORT_TASKS = _ACTION_INCLUDE + _ACTION_INCLUDE_TASKS + _ACTION_IMPORT_TASKS _ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES = _ACTION_INCLUDE_ROLE + _ACTION_IMPORT_ROLE _ACTION_ALL_PROPER_INCLUDE_IMPORT_TASKS = _ACTION_INCLUDE_TASKS + _ACTION_IMPORT_TASKS _ACTION_ALL_INCLUDE_ROLE_TASKS = _ACTION_INCLUDE_ROLE + _ACTION_INCLUDE_TASKS _ACTION_ALL_INCLUDE_TASKS = _ACTION_INCLUDE + _ACTION_INCLUDE_TASKS _ACTION_FACT_GATHERING = _ACTION_SETUP + add_internal_fqcns(('gather_facts', )) _ACTION_WITH_CLEAN_FACTS = _ACTION_SET_FACT + _ACTION_INCLUDE_VARS # http://nezzen.net/2008/06/23/colored-text-in-python-using-ansi-escape-sequences/ COLOR_CODES = { 'black': u'0;30', 'bright gray': u'0;37', 'blue': u'0;34', 'white': u'1;37', 'green': u'0;32', 'bright blue': u'1;34', 'cyan': u'0;36', 
'bright green': u'1;32', 'red': u'0;31', 'bright cyan': u'1;36', 'purple': u'0;35', 'bright red': u'1;31', 'yellow': u'0;33', 'bright purple': u'1;35', 'dark gray': u'1;30', 'bright yellow': u'1;33', 'magenta': u'0;35', 'bright magenta': u'1;35', 'normal': u'0', } REJECT_EXTS = ('.pyc', '.pyo', '.swp', '.bak', '~', '.rpm', '.md', '.txt', '.rst') BOOL_TRUE = BOOLEANS_TRUE COLLECTION_PTYPE_COMPAT = {'module': 'modules'} DEFAULT_BECOME_PASS = None DEFAULT_PASSWORD_CHARS = to_text(ascii_letters + digits + ".,:-_", errors='strict') # characters included in auto-generated passwords DEFAULT_REMOTE_PASS = None DEFAULT_SUBSET = None # FIXME: expand to other plugins, but never doc fragments CONFIGURABLE_PLUGINS = ('become', 'cache', 'callback', 'cliconf', 'connection', 'httpapi', 'inventory', 'lookup', 'netconf', 'shell', 'vars') # NOTE: always update the docs/docsite/Makefile to match DOCUMENTABLE_PLUGINS = CONFIGURABLE_PLUGINS + ('module', 'strategy') IGNORE_FILES = ("COPYING", "CONTRIBUTING", "LICENSE", "README", "VERSION", "GUIDELINES") # ignore during module search INTERNAL_RESULT_KEYS = ('add_host', 'add_group') LOCALHOST = ('127.0.0.1', 'localhost', '::1') MODULE_REQUIRE_ARGS = tuple(add_internal_fqcns(('command', 'win_command', 'ansible.windows.win_command', 'shell', 'win_shell', 'ansible.windows.win_shell', 'raw', 'script'))) MODULE_NO_JSON = tuple(add_internal_fqcns(('command', 'win_command', 'ansible.windows.win_command', 'shell', 'win_shell', 'ansible.windows.win_shell', 'raw'))) RESTRICTED_RESULT_KEYS = ('ansible_rsync_path', 'ansible_playbook_python', 'ansible_facts') TREE_DIR = None VAULT_VERSION_MIN = 1.0 VAULT_VERSION_MAX = 1.0 # This matches a string that cannot be used as a valid python variable name i.e 'not-valid', 'not!valid@either' '1_nor_This' INVALID_VARIABLE_NAMES = re.compile(r'^[\d\W]|[^\w]') # FIXME: remove once play_context mangling is removed # the magic variable mapping dictionary below is used to translate # host/inventory variables to fields in the PlayContext # object. The dictionary values are tuples, to account for aliases # in variable names. 
COMMON_CONNECTION_VARS = frozenset(('ansible_connection', 'ansible_host', 'ansible_user', 'ansible_shell_executable', 'ansible_port', 'ansible_pipelining', 'ansible_password', 'ansible_timeout', 'ansible_shell_type', 'ansible_module_compression', 'ansible_private_key_file')) MAGIC_VARIABLE_MAPPING = dict( # base connection=('ansible_connection', ), module_compression=('ansible_module_compression', ), shell=('ansible_shell_type', ), executable=('ansible_shell_executable', ), # connection common remote_addr=('ansible_ssh_host', 'ansible_host'), remote_user=('ansible_ssh_user', 'ansible_user'), password=('ansible_ssh_pass', 'ansible_password'), port=('ansible_ssh_port', 'ansible_port'), pipelining=('ansible_ssh_pipelining', 'ansible_pipelining'), timeout=('ansible_ssh_timeout', 'ansible_timeout'), private_key_file=('ansible_ssh_private_key_file', 'ansible_private_key_file'), # networking modules network_os=('ansible_network_os', ), connection_user=('ansible_connection_user',), # ssh TODO: remove ssh_executable=('ansible_ssh_executable', ), ssh_common_args=('ansible_ssh_common_args', ), sftp_extra_args=('ansible_sftp_extra_args', ), scp_extra_args=('ansible_scp_extra_args', ), ssh_extra_args=('ansible_ssh_extra_args', ), ssh_transfer_method=('ansible_ssh_transfer_method', ), # docker TODO: remove docker_extra_args=('ansible_docker_extra_args', ), # become become=('ansible_become', ), become_method=('ansible_become_method', ), become_user=('ansible_become_user', ), become_pass=('ansible_become_password', 'ansible_become_pass'), become_exe=('ansible_become_exe', ), become_flags=('ansible_become_flags', ), ) # POPULATE SETTINGS FROM CONFIG ### config = ConfigManager() # Generate constants from config for setting in config.data.get_settings(): value = setting.value if setting.origin == 'default' and \ isinstance(setting.value, string_types) and \ (setting.value.startswith('{{') and setting.value.endswith('}}')): try: t = Template(setting.value) value = t.render(vars()) try: value = literal_eval(value) except ValueError: pass # not a python data structure except Exception: pass # not templatable value = ensure_type(value, setting.type) set_constant(setting.name, value) for warn in config.WARNINGS: _warning(warn)
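For illustration, here is a minimal sketch of how a consumer might resolve one PlayContext field from host variables via the alias tuples in `MAGIC_VARIABLE_MAPPING` above. The `hostvars` data and the `resolve` helper are hypothetical, not part of Ansible's API; real resolution happens inside PlayContext with extra precedence rules.

```python
# Hypothetical helper: walk a field's alias tuple and return the first
# alias that is actually present in the host's variables.
from ansible import constants as C

hostvars = {'ansible_ssh_host': '192.168.100.100', 'ansible_user': 'deploy'}  # made-up data

def resolve(field, variables):
    for alias in C.MAGIC_VARIABLE_MAPPING.get(field, ()):
        if alias in variables:
            return variables[alias]
    return None

print(resolve('remote_addr', hostvars))  # -> '192.168.100.100' (via ansible_ssh_host)
print(resolve('remote_user', hostvars))  # -> 'deploy' (via ansible_user)
```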
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME add_host module ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION ``` $ ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Local action, Mac OS ##### STEPS TO REPRODUCE Run this playbook with >=1 host in the inventory ```yaml --- - name: add hosts to inventory hosts: all gather_facts: false connection: local vars: num_hosts: 10 tasks: - name: create inventory add_host: name: 'host-{{item}}' groups: dynamic ansible_connection: local host_id: '{{item}}' with_sequence: start=1 end={{num_hosts}} format=%d # changed_when: true notify: - single host handler handlers: - name: single host handler command: 'true' ``` ##### EXPECTED RESULTS I expect the tasks to be changed, and I expect the handler to be run. Look at the output from Ansible 2.9 ``` $ ansible-playbook -i host1, dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* changed: [host1] => (item=1) changed: [host1] => (item=2) changed: [host1] => (item=3) changed: [host1] => (item=4) changed: [host1] => (item=5) changed: [host1] => (item=6) changed: [host1] => (item=7) changed: [host1] => (item=8) changed: [host1] => (item=9) changed: [host1] => (item=10) RUNNING HANDLER [single host handler] *********************************************************************************************************************************************** [WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this.
See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. changed: [host1] PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ##### ACTUAL RESULTS with `# changed_when: true` left as a comment ``` $ ansible-playbook -i host1, dynamic_inventory.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* ok: [host1] => (item=1) ok: [host1] => (item=2) ok: [host1] => (item=3) ok: [host1] => (item=4) ok: [host1] => (item=5) ok: [host1] => (item=6) ok: [host1] => (item=7) ok: [host1] => (item=8) ok: [host1] => (item=9) ok: [host1] => (item=10) PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` no handler is run. When uncommenting `changed_when: true` ``` $ ansible-playbook -i host1, dynamic_inventory.yml -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible-playbook 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file Parsed host1, inventory source with host_list plugin PLAYBOOK: dynamic_inventory.yml ***************************************************************************************************************************************************** 1 plays in dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* META: ran handlers TASK [create inventory] ************************************************************************************************************************************************************* task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9 creating host via 'add_host': hostname=host-1 changed: [host1] => (item=1) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-1", "host_vars": { "ansible_connection": "local", "host_id": "1" } }, "ansible_loop_var": "item", "changed": true, "item": "1" } creating host via 'add_host': hostname=host-2 changed: [host1] => (item=2) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-2", "host_vars": { "ansible_connection": "local", "host_id": "2" } }, "ansible_loop_var": "item", "changed": true, "item": "2" } creating host via 'add_host': hostname=host-3 changed: [host1] => (item=3) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-3", "host_vars": { "ansible_connection": "local", "host_id": "3" } }, "ansible_loop_var": "item", "changed": true, "item": "3" } creating host via 'add_host': hostname=host-4 changed: [host1] => (item=4) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-4", "host_vars": { "ansible_connection": "local", "host_id": "4" } }, "ansible_loop_var": "item", "changed": true, "item": "4" } creating host via 'add_host': hostname=host-5 changed: [host1] => (item=5) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-5", "host_vars": { "ansible_connection": "local", "host_id": "5" } }, "ansible_loop_var": "item", "changed": true, "item": "5" } creating host via 'add_host': hostname=host-6 changed: [host1] => (item=6) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-6", "host_vars": { "ansible_connection": "local", "host_id": "6" } }, "ansible_loop_var": "item", "changed": true, "item": "6" } creating host via 'add_host': hostname=host-7 changed: [host1] => (item=7) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-7", "host_vars": { "ansible_connection": "local", "host_id": "7" } }, "ansible_loop_var": "item", "changed": true, "item": "7" } creating host via 'add_host': hostname=host-8 changed: [host1] => (item=8) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-8", "host_vars": { "ansible_connection": "local", "host_id": "8" } }, "ansible_loop_var": "item", "changed": 
true, "item": "8" } creating host via 'add_host': hostname=host-9 changed: [host1] => (item=9) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-9", "host_vars": { "ansible_connection": "local", "host_id": "9" } }, "ansible_loop_var": "item", "changed": true, "item": "9" } creating host via 'add_host': hostname=host-10 changed: [host1] => (item=10) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-10", "host_vars": { "ansible_connection": "local", "host_id": "10" } }, "ansible_loop_var": "item", "changed": true, "item": "10" } NOTIFIED HANDLER single host handler for host1 ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional for conditional in self.when: TypeError: 'bool' object is not iterable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run results = pbex.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run result = self._tqm.run(play=play) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run play_return = strategy.run(iterator, play_context) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run results += self._wait_on_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results results = self._process_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results post_process_whens(result_item, original_task, handler_templar) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds) UnboundLocalError: local variable 'conditional' referenced before assignment ```
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
lib/ansible/executor/task_executor.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import pty import time import json import signal import subprocess import sys import termios import traceback from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip from ansible.executor.task_result import TaskResult from ansible.executor.module_common import get_action_args_with_defaults from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import binary_type from ansible.module_utils._text import to_text, to_native from ansible.module_utils.connection import write_to_file_descriptor from ansible.playbook.conditional import Conditional from ansible.playbook.task import Task from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var from ansible.vars.clean import namespace_facts, clean_facts from ansible.utils.display import Display from ansible.utils.vars import combine_vars, isidentifier display = Display() RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x] __all__ = ['TaskExecutor'] class TaskTimeoutError(BaseException): pass def task_timeout(signum, frame): raise TaskTimeoutError def remove_omit(task_args, omit_token): ''' Remove args with a value equal to the ``omit_token`` recursively to align with now having suboptions in the argument_spec ''' if not isinstance(task_args, dict): return task_args new_args = {} for i in task_args.items(): if i[1] == omit_token: continue elif isinstance(i[1], dict): new_args[i[0]] = remove_omit(i[1], omit_token) elif isinstance(i[1], list): new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]] else: new_args[i[0]] = i[1] return new_args class TaskExecutor: ''' This is the main worker class for the executor pipeline, which handles loading an action plugin to actually dispatch the task to a given host. This class roughly corresponds to the old Runner() class. ''' def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q): self._host = host self._task = task self._job_vars = job_vars self._play_context = play_context self._new_stdin = new_stdin self._loader = loader self._shared_loader_obj = shared_loader_obj self._connection = None self._final_q = final_q self._loop_eval_error = None self._task.squash() def run(self): ''' The main executor entrypoint, where we determine if the specified task requires looping and either runs the task with self._run_loop() or self._execute(). After that, the returned results are parsed and returned as a dict. 
''' display.debug("in run() - task %s" % self._task._uuid) try: try: items = self._get_loop_items() except AnsibleUndefinedVariable as e: # save the error raised here for use later items = None self._loop_eval_error = e if items is not None: if len(items) > 0: item_results = self._run_loop(items) # create the overall result item res = dict(results=item_results) # loop through the item results and set the global changed/failed/skipped result flags based on any item. res['skipped'] = True for item in item_results: if 'changed' in item and item['changed'] and not res.get('changed'): res['changed'] = True if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])): res['skipped'] = False if 'failed' in item and item['failed']: item_ignore = item.pop('_ansible_ignore_errors') if not res.get('failed'): res['failed'] = True res['msg'] = 'One or more items failed' self._task.ignore_errors = item_ignore elif self._task.ignore_errors and not item_ignore: self._task.ignore_errors = item_ignore # ensure to accumulate these for array in ['warnings', 'deprecations']: if array in item and item[array]: if array not in res: res[array] = [] if not isinstance(item[array], list): item[array] = [item[array]] res[array] = res[array] + item[array] del item[array] if not res.get('failed', False): res['msg'] = 'All items completed' if res['skipped']: res['msg'] = 'All items skipped' else: res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[]) else: display.debug("calling self._execute()") res = self._execute() display.debug("_execute() done") # make sure changed is set in the result, if it's not present if 'changed' not in res: res['changed'] = False def _clean_res(res, errors='surrogate_or_strict'): if isinstance(res, binary_type): return to_unsafe_text(res, errors=errors) elif isinstance(res, dict): for k in res: try: res[k] = _clean_res(res[k], errors=errors) except UnicodeError: if k == 'diff': # If this is a diff, substitute a replacement character if the value # is undecodable as utf8. (Fix #21804) display.warning("We were unable to decode all characters in the module return data." " Replaced some in an effort to return as much as possible") res[k] = _clean_res(res[k], errors='surrogate_then_replace') else: raise elif isinstance(res, list): for idx, item in enumerate(res): res[idx] = _clean_res(item, errors=errors) return res display.debug("dumping result to json") res = _clean_res(res) display.debug("done dumping result, returning") return res except AnsibleError as e: return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log) except Exception as e: return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log) finally: try: self._connection.close() except AttributeError: pass except Exception as e: display.debug(u"error closing connection: %s" % to_text(e)) def _get_loop_items(self): ''' Loads a lookup plugin to handle the with_* portion of a task (if specified), and returns the items result. 
''' # get search path for this task to pass to lookup plugins self._job_vars['ansible_search_path'] = self._task.get_search_path() # ensure basedir is always in (dwim already searches here but we need to display it) if self._loader.get_basedir() not in self._job_vars['ansible_search_path']: self._job_vars['ansible_search_path'].append(self._loader.get_basedir()) templar = Templar(loader=self._loader, variables=self._job_vars) items = None loop_cache = self._job_vars.get('_ansible_loop_cache') if loop_cache is not None: # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to` # to avoid reprocessing the loop items = loop_cache elif self._task.loop_with: if self._task.loop_with in self._shared_loader_obj.lookup_loader: fail = True if self._task.loop_with == 'first_found': # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing. fail = False loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail, convert_bare=False) if not fail: loop_terms = [t for t in loop_terms if not templar.is_template(t)] # get lookup mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar) # give lookup task 'context' for subdir (mostly needed for first_found) for subdir in ['template', 'var', 'file']: # TODO: move this to constants? if subdir in self._task.action: break setattr(mylookup, '_subdir', subdir + 's') # run lookup items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)) else: raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with) elif self._task.loop is not None: items = templar.template(self._task.loop) if not isinstance(items, list): raise AnsibleError( "Invalid data passed to 'loop', it requires a list, got this instead: %s." " Hint: If you passed a list/dict of just one element," " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items ) return items def _run_loop(self, items): ''' Runs the task with the loop items specified and collates the result into an array named 'results' which is inserted into the final result along with the item for which the loop ran. ''' results = [] # make copies of the job vars and task so we can add the item to # the variables and re-validate the task with the item variable # task_vars = self._job_vars.copy() task_vars = self._job_vars loop_var = 'item' index_var = None label = None loop_pause = 0 extended = False templar = Templar(loader=self._loader, variables=self._job_vars) # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate) if self._task.loop_control: loop_var = templar.template(self._task.loop_control.loop_var) index_var = templar.template(self._task.loop_control.index_var) loop_pause = templar.template(self._task.loop_control.pause) extended = templar.template(self._task.loop_control.extended) # This may be 'None', so it is templated below after we ensure a value and an item is assigned label = self._task.loop_control.label # ensure we always have a label if label is None: label = '{{' + loop_var + '}}' if loop_var in task_vars: display.warning(u"%s: The loop variable '%s' is already in use. " u"You should set the `loop_var` value in the `loop_control` option for the task" u" to something else to avoid variable collisions and unexpected behavior."
% (self._task, loop_var)) ran_once = False no_log = False items_len = len(items) for item_index, item in enumerate(items): task_vars['ansible_loop_var'] = loop_var task_vars[loop_var] = item if index_var: task_vars['ansible_index_var'] = index_var task_vars[index_var] = item_index if extended: task_vars['ansible_loop'] = { 'allitems': items, 'index': item_index + 1, 'index0': item_index, 'first': item_index == 0, 'last': item_index + 1 == items_len, 'length': items_len, 'revindex': items_len - item_index, 'revindex0': items_len - item_index - 1, } try: task_vars['ansible_loop']['nextitem'] = items[item_index + 1] except IndexError: pass if item_index - 1 >= 0: task_vars['ansible_loop']['previtem'] = items[item_index - 1] # Update template vars to reflect current loop iteration templar.available_variables = task_vars # pause between loop iterations if loop_pause and ran_once: try: time.sleep(float(loop_pause)) except ValueError as e: raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e))) else: ran_once = True try: tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True) tmp_task._parent = self._task._parent tmp_play_context = self._play_context.copy() except AnsibleParserError as e: results.append(dict(failed=True, msg=to_text(e))) continue # now we swap the internal task and play context with their copies, # execute, and swap them back so we can do the next iteration cleanly (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) res = self._execute(variables=task_vars) task_fields = self._task.dump_attrs() (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) # update 'general no_log' based on specific no_log no_log = no_log or tmp_task.no_log # now update the result with the item info, and append the result # to the list of results res[loop_var] = item res['ansible_loop_var'] = loop_var if index_var: res[index_var] = item_index res['ansible_index_var'] = index_var if extended: res['ansible_loop'] = task_vars['ansible_loop'] res['_ansible_item_result'] = True res['_ansible_ignore_errors'] = task_fields.get('ignore_errors') # gets templated here unlike rest of loop_control fields, depends on loop_var above try: res['_ansible_item_label'] = templar.template(label, cache=False) except AnsibleUndefinedVariable as e: res.update({ 'failed': True, 'msg': 'Failed to template loop_control.label: %s' % to_text(e) }) tr = TaskResult( self._host.name, self._task._uuid, res, task_fields=task_fields, ) if tr.is_failed() or tr.is_unreachable(): self._final_q.send_callback('v2_runner_item_on_failed', tr) elif tr.is_skipped(): self._final_q.send_callback('v2_runner_item_on_skipped', tr) else: if getattr(self._task, 'diff', False): self._final_q.send_callback('v2_on_file_diff', tr) self._final_q.send_callback('v2_runner_item_on_ok', tr) results.append(res) del task_vars[loop_var] # clear 'connection related' plugin variables for next iteration if self._connection: clear_plugins = { 'connection': self._connection._load_name, 'shell': self._connection._shell._load_name } if self._connection.become: clear_plugins['become'] = self._connection.become._load_name for plugin_type, plugin_name in clear_plugins.items(): for var in C.config.get_plugin_vars(plugin_type, plugin_name): if var in task_vars and var not in self._job_vars: del task_vars[var] self._task.no_log = no_log return results def _execute(self, 
variables=None): ''' The primary workhorse of the executor system, this runs the task on the specified host (which may be the delegated_to host) and handles the retry/until and block rescue/always execution ''' if variables is None: variables = self._job_vars templar = Templar(loader=self._loader, variables=variables) context_validation_error = None try: # TODO: remove play_context as this does not take delegation into account, task itself should hold values # for connection/shell/become/terminal plugin options to finalize. # Kept for now for backwards compatibility and a few functions that are still exclusive to it. # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. self._play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not self._play_context.remote_addr: self._play_context.remote_addr = self._host.address # We also add "magic" variables back into the variables dict to make sure # a certain subset of variables exist. self._play_context.update_vars(variables) except AnsibleError as e: # save the error, which we'll raise later if we don't end up # skipping this task during the conditional evaluation step context_validation_error = e # Evaluate the conditional (if any) for this task, which we do before running # the final task post-validation. We do this before the post validation due to # the fact that the conditional may specify that the task be skipped due to a # variable not being present which would otherwise cause validation to fail try: if not self._task.evaluate_conditional(templar, variables): display.debug("when evaluation is False, skipping this task") return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log) except AnsibleError as e: # loop error takes precedence if self._loop_eval_error is not None: # Display the error from the conditional as well to prevent # losing information useful for debugging. 
display.v(to_text(e)) raise self._loop_eval_error  # pylint: disable=raising-bad-type raise # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task if self._loop_eval_error is not None: raise self._loop_eval_error  # pylint: disable=raising-bad-type # if we ran into an error while setting up the PlayContext, raise it now, unless it is a known issue with delegation if context_validation_error is not None and not (self._task.delegate_to and isinstance(context_validation_error, AnsibleUndefinedVariable)): raise context_validation_error  # pylint: disable=raising-bad-type # if this task is a TaskInclude, we just return now with a success code so the # main thread can expand the task list for the given host if self._task.action in C._ACTION_ALL_INCLUDE_TASKS: include_args = self._task.args.copy() include_file = include_args.pop('_raw_params', None) if not include_file: return dict(failed=True, msg="No include file was specified to the include") include_file = templar.template(include_file) return dict(include=include_file, include_args=include_args) # if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host elif self._task.action in C._ACTION_INCLUDE_ROLE: include_args = self._task.args.copy() return dict(include_args=include_args) # Now we do final validation on the task, which sets all fields to their final values. try: self._task.post_validate(templar=templar) except AnsibleError: raise except Exception: return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc())) if '_variable_params' in self._task.args: variable_params = self._task.args.pop('_variable_params') if isinstance(variable_params, dict): if C.INJECT_FACTS_AS_VARS: display.warning("Using a variable for a task's 'args' is unsafe in some situations " "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)") variable_params.update(self._task.args) self._task.args = variable_params if self._task.delegate_to: # use vars from delegated host (which already include task vars) instead of original host cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {}) orig_vars = templar.available_variables else: # just use normal host vars cvars = orig_vars = variables templar.available_variables = cvars # get the connection and the handler for this execution if (not self._connection or not getattr(self._connection, 'connected', False) or self._play_context.remote_addr != self._connection._play_context.remote_addr): self._connection = self._get_connection(cvars, templar) else: # if connection is reused, its _play_context is no longer valid and needs # to be replaced with the one templated above, in case other data changed self._connection._play_context = self._play_context plugin_vars = self._set_connection_options(cvars, templar) templar.available_variables = orig_vars # TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules # special handling for python interpreter for network_os, default to ansible python unless overridden if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars: # this also avoids 'python discovery' cvars['ansible_python_interpreter'] = sys.executable # get handler self._handler = self._get_action_handler(connection=self._connection, templar=templar) # Apply default params for action/module, if present self._task.args =
get_action_args_with_defaults( self._task.resolved_action, self._task.args, self._task.module_defaults, templar, action_groups=self._task._parent._play._action_groups ) # And filter out any fields which were set to default(omit), and got the omit token value omit_token = variables.get('omit') if omit_token is not None: self._task.args = remove_omit(self._task.args, omit_token) # Read some values from the task, so that we can modify them if need be if self._task.until: retries = self._task.retries if retries is None: retries = 3 elif retries <= 0: retries = 1 else: retries += 1 else: retries = 1 delay = self._task.delay if delay < 0: delay = 1 # make a copy of the job vars here, in case we need to update them # with the registered variable value later on when testing conditions vars_copy = variables.copy() display.debug("starting attempt loop") result = None for attempt in range(1, retries + 1): display.debug("running the handler") try: if self._task.timeout: old_sig = signal.signal(signal.SIGALRM, task_timeout) signal.alarm(self._task.timeout) result = self._handler.run(task_vars=variables) except (AnsibleActionFail, AnsibleActionSkip) as e: return e.result except AnsibleConnectionFailure as e: return dict(unreachable=True, msg=to_text(e)) except TaskTimeoutError as e: msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout) return dict(failed=True, msg=msg) finally: if self._task.timeout: signal.alarm(0) old_sig = signal.signal(signal.SIGALRM, old_sig) self._handler.cleanup() display.debug("handler run complete") # preserve no log result["_ansible_no_log"] = self._play_context.no_log if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution if self._task.register: if not isidentifier(self._task.register): raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register) vars_copy[self._task.register] = result if self._task.async_val > 0: if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'): result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy) if result.get('failed'): self._final_q.send_callback( 'v2_runner_on_async_failed', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) else: self._final_q.send_callback( 'v2_runner_on_async_ok', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) # ensure no log is preserved result["_ansible_no_log"] = self._play_context.no_log # helper methods for use below in evaluating changed/failed_when def _evaluate_changed_when_result(result): if self._task.changed_when is not None and self._task.changed_when: cond = Conditional(loader=self._loader) cond.when = self._task.changed_when result['changed'] = cond.evaluate_conditional(templar, vars_copy) def _evaluate_failed_when_result(result): if self._task.failed_when: cond = Conditional(loader=self._loader) cond.when = self._task.failed_when failed_when_result = cond.evaluate_conditional(templar, vars_copy) result['failed_when_result'] = result['failed'] = failed_when_result else: failed_when_result = False return failed_when_result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and 
self._task.delegate_facts: if '_ansible_delegated_vars' in vars_copy: vars_copy['_ansible_delegated_vars'].update(result['ansible_facts']) else: vars_copy['_ansible_delegated_vars'] = result['ansible_facts'] else: vars_copy.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: vars_copy.update(clean_facts(af)) # set the failed property if it was missing. if 'failed' not in result: # rc is here for backwards compatibility and modules that use it instead of 'failed' if 'rc' in result and result['rc'] not in [0, "0"]: result['failed'] = True else: result['failed'] = False # Make attempts and retries available early to allow their use in changed/failed_when if self._task.until: result['attempts'] = attempt # set the changed property if it was missing. if 'changed' not in result: result['changed'] = False if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # re-update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution # This gives changed/failed_when access to additional recently modified # attributes of result if self._task.register: vars_copy[self._task.register] = result # if we didn't skip this task, use the helpers to evaluate the changed/ # failed_when properties if 'skipped' not in result: try: condname = 'changed' _evaluate_changed_when_result(result) condname = 'failed' _evaluate_failed_when_result(result) except AnsibleError as e: result['failed'] = True result['%s_when_result' % condname] = to_text(e) if retries > 1: cond = Conditional(loader=self._loader) cond.when = self._task.until if cond.evaluate_conditional(templar, vars_copy): break else: # no conditional check, or it failed, so sleep for the specified time if attempt < retries: result['_ansible_retry'] = True result['retries'] = retries display.debug('Retrying task, attempt %d of %d' % (attempt, retries)) self._final_q.send_callback( 'v2_runner_retry', TaskResult( self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs() ) ) time.sleep(delay) self._handler = self._get_action_handler(connection=self._connection, templar=templar) else: if retries > 1: # we ran out of attempts, so mark the result as failed result['attempts'] = retries - 1 result['failed'] = True if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # do the final update of the local variables here, for both registered # values and any facts which may have been created if self._task.register: variables[self._task.register] = result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and self._task.delegate_facts: if '_ansible_delegated_vars' in variables: variables['_ansible_delegated_vars'].update(result['ansible_facts']) else: variables['_ansible_delegated_vars'] = result['ansible_facts'] else: variables.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: variables.update(clean_facts(af)) # save the notification target in the result, if it 
was specified, as # this task may be running in a loop in which case the notification # may be item-specific, i.e. "notify: service {{item}}" if self._task.notify is not None: result['_ansible_notify'] = self._task.notify # add the delegated vars to the result, so we can reference them # on the results side without having to do any further templating # also now add connection vars results when delegating if self._task.delegate_to: result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to} for k in plugin_vars: result["_ansible_delegated_vars"][k] = cvars.get(k) # note: here for callbacks that rely on this info to display delegation for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'): if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars: result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed) # and return display.debug("attempt loop complete, returning result") return result def _poll_async_result(self, result, templar, task_vars=None): ''' Polls for the specified JID to be complete ''' if task_vars is None: task_vars = self._job_vars async_jid = result.get('ansible_job_id') if async_jid is None: return dict(failed=True, msg="No job id was returned by the async task") # Create a new pseudo-task to run the async_status module, and run # that (with a sleep for "poll" seconds between each retry) until the # async time limit is exceeded. async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment)) # FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized # Because this is an async task, the action handler is async. However, # we need the 'normal' action handler for the status check, so get it # now via the action_loader async_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=async_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) time_left = self._task.async_val while time_left > 0: time.sleep(self._task.poll) try: async_result = async_handler.run(task_vars=task_vars) # We do not bail out of the loop in cases where the failure # is associated with a parsing error. The async_runner can # have issues which result in a half-written/unparseable result # file on disk, which manifests to the user as a timeout happening # before it's time to timeout. if (int(async_result.get('finished', 0)) == 1 or ('failed' in async_result and async_result.get('_ansible_parsed', False)) or 'skipped' in async_result): break except Exception as e: # Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal. # On an exception, call the connection's reset method if it has one # (eg, drop/recreate WinRM connection; some reused connections are in a broken state) display.vvvv("Exception during async poll, retrying...
(%s)" % to_text(e)) display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc())) try: async_handler._connection.reset() except AttributeError: pass # Little hack to raise the exception if we've exhausted the timeout period time_left -= self._task.poll if time_left <= 0: raise else: time_left -= self._task.poll self._final_q.send_callback( 'v2_runner_on_async_poll', TaskResult( self._host.name, async_task._uuid, async_result, task_fields=async_task.dump_attrs(), ), ) if int(async_result.get('finished', 0)) != 1: if async_result.get('_ansible_parsed'): return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result) else: return dict(failed=True, msg="async task produced unparseable results", async_result=async_result) else: # If the async task finished, automatically cleanup the temporary # status file left behind. cleanup_task = Task.load( { 'async_status': { 'jid': async_jid, 'mode': 'cleanup', }, 'environment': self._task.environment, } ) cleanup_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=cleanup_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) cleanup_handler.run(task_vars=task_vars) cleanup_handler.cleanup(force=True) async_handler.cleanup(force=True) return async_result def _get_become(self, name): become = become_loader.get(name) if not become: raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. " "Use `ansible-doc -t become -l` to list available plugins." % name) return become def _get_connection(self, cvars, templar): ''' Reads the connection property for the host, and returns the correct connection object from the list of connection plugins ''' # use magic var if it exists, if not, let task inheritance do it's thing. if cvars.get('ansible_connection') is not None: self._play_context.connection = templar.template(cvars['ansible_connection']) else: self._play_context.connection = self._task.connection # TODO: play context has logic to update the connection for 'smart' # (default value, will chose between ssh and paramiko) and 'persistent' # (really paramiko), eventually this should move to task object itself. connection_name = self._play_context.connection # load connection conn_type = connection_name connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context( conn_type, self._play_context, self._new_stdin, task_uuid=self._task._uuid, ansible_playbook_pid=to_text(os.getppid()) ) if not connection: raise AnsibleError("the connection plugin '%s' was not found" % conn_type) # load become plugin if needed if cvars.get('ansible_become') is not None: become = boolean(templar.template(cvars['ansible_become'])) else: become = self._task.become if become: if cvars.get('ansible_become_method'): become_plugin = self._get_become(templar.template(cvars['ansible_become_method'])) else: become_plugin = self._get_become(self._task.become_method) try: connection.set_become_plugin(become_plugin) except AttributeError: # Older connection plugin that does not support set_become_plugin pass if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False): raise AnsibleError( "The '%s' connection does not provide a TTY which is required for the selected " "become plugin: %s." 
% (conn_type, become_plugin.name) ) # Backwards compat for connection plugins that don't support become plugins # Just do this unconditionally for now, we could move it inside of the # AttributeError above later self._play_context.set_become_plugin(become_plugin.name) # Also backwards compat call for those still using play_context self._play_context.set_attributes_from_plugin(connection) if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)): self._play_context.timeout = connection.get_option('persistent_command_timeout') display.vvvv('attempting to start connection', host=self._play_context.remote_addr) display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr) options = self._get_persistent_connection_options(connection, cvars, templar) socket_path = start_connection(self._play_context, options, self._task._uuid) display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr) setattr(connection, '_socket_path', socket_path) return connection def _get_persistent_connection_options(self, connection, final_vars, templar): option_vars = C.config.get_plugin_vars('connection', connection._load_name) plugin = connection._sub_plugin if plugin.get('type'): option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name'])) options = {} for k in option_vars: if k in final_vars: options[k] = templar.template(final_vars[k]) return options def _set_plugin_options(self, plugin_type, variables, templar, task_keys): try: plugin = getattr(self._connection, '_%s' % plugin_type) except AttributeError: # Some plugins are assigned to private attrs, ``become`` is not plugin = getattr(self._connection, plugin_type) option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name) options = {} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # TODO move to task method? plugin.set_options(task_keys=task_keys, var_options=options) return option_vars def _set_connection_options(self, variables, templar): # keep list of variable names possibly consumed varnames = [] # grab list of usable vars for this plugin option_vars = C.config.get_plugin_vars('connection', self._connection._load_name) varnames.extend(option_vars) # create dict of 'templated vars' options = {'_extras': {}} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # add extras if plugin supports them if getattr(self._connection, 'allow_extras', False): for k in variables: if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options: options['_extras'][k] = templar.template(variables[k]) task_keys = self._task.dump_attrs() # The task_keys 'timeout' attr is the task's timeout, not the connection timeout. # The connection timeout is threaded through the play_context for now. task_keys['timeout'] = self._play_context.timeout if self._play_context.password: # The connection password is threaded through the play_context for # now. This is something we ultimately want to avoid, but the first # step is to get connection plugins pulling the password through the # config system instead of directly accessing play_context. 
task_keys['password'] = self._play_context.password # Prevent task retries from overriding connection retries del(task_keys['retries']) # set options with 'templated vars' specific to this plugin and dependent ones self._connection.set_options(task_keys=task_keys, var_options=options) varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys)) if self._connection.become is not None: if self._play_context.become_pass: # FIXME: eventually remove from task and play_context, here for backwards compat # keep out of play objects to avoid accidental disclosure, only become plugin should have # The become pass is already in the play_context if given on # the CLI (-K). Make the plugin aware of it in this case. task_keys['become_pass'] = self._play_context.become_pass varnames.extend(self._set_plugin_options('become', variables, templar, task_keys)) # FOR BACKWARDS COMPAT: for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'): try: setattr(self._play_context, option, self._connection.become.get_option(option)) except KeyError: pass # some plugins don't support all base flags self._play_context.prompt = self._connection.become.prompt return varnames def _get_action_handler(self, connection, templar): ''' Returns the correct action plugin to handle the requested task action ''' module_collection, separator, module_name = self._task.action.rpartition(".") module_prefix = module_name.split('_')[0] if module_collection: # For network modules, which look for one action plugin per platform, look for the # action plugin in the same collection as the module by prefixing the action plugin # with the same collection. network_action = "{0}.{1}".format(module_collection, module_prefix) else: network_action = module_prefix collections = self._task.collections # let action plugin override module, fallback to 'normal' action plugin otherwise if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections): handler_name = self._task.action elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))): handler_name = network_action display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name, action=self._task.action), host=self._play_context.remote_addr) else: # use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search handler_name = 'ansible.legacy.normal' collections = None # until then, we don't want the task's collection list to be consulted; use the builtin handler = self._shared_loader_obj.action_loader.get( handler_name, task=self._task, connection=connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, collection_list=collections ) if not handler: raise AnsibleError("the handler '%s' was not found" % handler_name) return handler def start_connection(play_context, variables, task_uuid): ''' Starts the persistent connection ''' candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])] candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep)) for dirname in candidate_paths: ansible_connection = os.path.join(dirname, 'ansible-connection') if os.path.isfile(ansible_connection): display.vvvv("Found ansible-connection at path {0}".format(ansible_connection)) break else: raise AnsibleError("Unable to find location of 'ansible-connection'.
" "Please set or check the value of ANSIBLE_CONNECTION_PATH") env = os.environ.copy() env.update({ # HACK; most of these paths may change during the controller's lifetime # (eg, due to late dynamic role includes, multi-playbook execution), without a way # to invalidate/update, ansible-connection won't always see the same plugins the controller # can. 'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(), 'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(), 'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)), 'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(), 'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(), 'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(), 'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(), }) python = sys.executable master, slave = pty.openpty() p = subprocess.Popen( [python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) os.close(slave) # We need to set the pty into noncanonical mode. This ensures that we # can receive lines longer than 4095 characters (plus newline) without # truncating. old = termios.tcgetattr(master) new = termios.tcgetattr(master) new[3] = new[3] & ~termios.ICANON try: termios.tcsetattr(master, termios.TCSANOW, new) write_to_file_descriptor(master, variables) write_to_file_descriptor(master, play_context.serialize()) (stdout, stderr) = p.communicate() finally: termios.tcsetattr(master, termios.TCSANOW, old) os.close(master) if p.returncode == 0: result = json.loads(to_text(stdout, errors='surrogate_then_replace')) else: try: result = json.loads(to_text(stderr, errors='surrogate_then_replace')) except getattr(json.decoder, 'JSONDecodeError', ValueError): # JSONDecodeError only available on Python 3.5+ result = {'error': to_text(stderr, errors='surrogate_then_replace')} if 'messages' in result: for level, message in result['messages']: if level == 'log': display.display(message, log_only=True) elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'): getattr(display, level)(message, host=play_context.remote_addr) else: if hasattr(display, level): getattr(display, level)(message) else: display.vvvv(message, host=play_context.remote_addr) if 'error' in result: if play_context.verbosity > 2: if result.get('exception'): msg = "The full traceback is:\n" + result['exception'] display.display(msg, color=C.COLOR_ERROR) raise AnsibleError(result['error']) return result['socket_path']
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME add_host module ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION ``` $ ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Local action, Mac OS ##### STEPS TO REPRODUCE Run this playbook with >=1 host in the inventory ```yaml --- - name: add hosts to inventory hosts: all gather_facts: false connection: local vars: num_hosts: 10 tasks: - name: create inventory add_host: name: 'host-{{item}}' groups: dynamic ansible_connection: local host_id: '{{item}}' with_sequence: start=1 end={{num_hosts}} format=%d # changed_when: true notify: - single host handler handlers: - name: single host handler command: 'true' ``` ##### EXPECTED RESULTS I expect the tasks to be changed, and I expect the handler to be run. Look at the output from Ansible 2.9 ``` $ ansible-playbook -i host1, dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************* TASK [create inventory] ************************************************************************************************************* changed: [host1] => (item=1) changed: [host1] => (item=2) changed: [host1] => (item=3) changed: [host1] => (item=4) changed: [host1] => (item=5) changed: [host1] => (item=6) changed: [host1] => (item=7) changed: [host1] => (item=8) changed: [host1] => (item=9) changed: [host1] => (item=10) RUNNING HANDLER [single host handler] *********************************************************************************************** [WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. 
See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. changed: [host1] PLAY RECAP ************************************************************************************************************** host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ##### ACTUAL RESULTS with `# changed_when: true` left as a comment ``` $ ansible-playbook -i host1, dynamic_inventory.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [add hosts to inventory] ******************************************************************************************************* TASK [create inventory] ************************************************************************************************************* ok: [host1] => (item=1) ok: [host1] => (item=2) ok: [host1] => (item=3) ok: [host1] => (item=4) ok: [host1] => (item=5) ok: [host1] => (item=6) ok: [host1] => (item=7) ok: [host1] => (item=8) ok: [host1] => (item=9) ok: [host1] => (item=10) PLAY RECAP ************************************************************************************************************** host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` no handler is run When uncommenting `changed_when: true` ``` $ ansible-playbook -i host1, dynamic_inventory.yml -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-playbook 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file Parsed host1, inventory source with host_list plugin PLAYBOOK: dynamic_inventory.yml ***************************************************************************************************************************************************** 1 plays in dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* META: ran handlers TASK [create inventory] ************************************************************************************************************************************************************* task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9 creating host via 'add_host': hostname=host-1 changed: [host1] => (item=1) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-1", "host_vars": { "ansible_connection": "local", "host_id": "1" } }, "ansible_loop_var": "item", "changed": true, "item": "1" } creating host via 'add_host': hostname=host-2 changed: [host1] => (item=2) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-2", "host_vars": { "ansible_connection": "local", "host_id": "2" } }, "ansible_loop_var": "item", "changed": true, "item": "2" } creating host via 'add_host': hostname=host-3 changed: [host1] => (item=3) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-3", "host_vars": { "ansible_connection": "local", "host_id": "3" } }, "ansible_loop_var": "item", "changed": true, "item": "3" } creating host via 'add_host': hostname=host-4 changed: [host1] => (item=4) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-4", "host_vars": { "ansible_connection": "local", "host_id": "4" } }, "ansible_loop_var": "item", "changed": true, "item": "4" } creating host via 'add_host': hostname=host-5 changed: [host1] => (item=5) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-5", "host_vars": { "ansible_connection": "local", "host_id": "5" } }, "ansible_loop_var": "item", "changed": true, "item": "5" } creating host via 'add_host': hostname=host-6 changed: [host1] => (item=6) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-6", "host_vars": { "ansible_connection": "local", "host_id": "6" } }, "ansible_loop_var": "item", "changed": true, "item": "6" } creating host via 'add_host': hostname=host-7 changed: [host1] => (item=7) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-7", "host_vars": { "ansible_connection": "local", "host_id": "7" } }, "ansible_loop_var": "item", "changed": true, "item": "7" } creating host via 'add_host': hostname=host-8 changed: [host1] => (item=8) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-8", "host_vars": { "ansible_connection": "local", "host_id": "8" } }, "ansible_loop_var": "item", "changed": 
true, "item": "8" } creating host via 'add_host': hostname=host-9 changed: [host1] => (item=9) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-9", "host_vars": { "ansible_connection": "local", "host_id": "9" } }, "ansible_loop_var": "item", "changed": true, "item": "9" } creating host via 'add_host': hostname=host-10 changed: [host1] => (item=10) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-10", "host_vars": { "ansible_connection": "local", "host_id": "10" } }, "ansible_loop_var": "item", "changed": true, "item": "10" } NOTIFIED HANDLER single host handler for host1 ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional for conditional in self.when: TypeError: 'bool' object is not iterable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run results = pbex.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run result = self._tqm.run(play=play) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run play_return = strategy.run(iterator, play_context) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run results += self._wait_on_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results results = self._process_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results post_process_whens(result_item, original_task, handler_templar) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds) UnboundLocalError: local variable 'conditional' referenced before assignment ```
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
lib/ansible/playbook/task.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError from ansible.module_utils._text import to_native from ansible.module_utils.six import string_types from ansible.parsing.mod_args import ModuleArgsParser from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping from ansible.plugins.loader import lookup_loader from ansible.playbook.attribute import FieldAttribute from ansible.playbook.base import Base from ansible.playbook.block import Block from ansible.playbook.collectionsearch import CollectionSearch from ansible.playbook.conditional import Conditional from ansible.playbook.loop_control import LoopControl from ansible.playbook.role import Role from ansible.playbook.taggable import Taggable from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.display import Display from ansible.utils.sentinel import Sentinel __all__ = ['Task'] display = Display() class Task(Base, Conditional, Taggable, CollectionSearch): """ A task is a language feature that represents a call to a module, with given arguments and other parameters. A handler is a subclass of a task. Usage: Task.load(datastructure) -> Task Task.something(...) """ # ================================================================================= # ATTRIBUTES # load_<attribute_name> and # validate_<attribute_name> # will be used if defined # might be possible to define others # NOTE: ONLY set defaults on task attributes that are not inheritable, # inheritance is only triggered if the 'current value' is None, # default can be set at play/top level object and inheritance will take it's course. 
_args = FieldAttribute(isa='dict', default=dict) _action = FieldAttribute(isa='string') _async_val = FieldAttribute(isa='int', default=0, alias='async') _changed_when = FieldAttribute(isa='list', default=list) _delay = FieldAttribute(isa='int', default=5) _delegate_to = FieldAttribute(isa='string') _delegate_facts = FieldAttribute(isa='bool') _failed_when = FieldAttribute(isa='list', default=list) _loop = FieldAttribute() _loop_control = FieldAttribute(isa='class', class_type=LoopControl, inherit=False) _notify = FieldAttribute(isa='list') _poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL) _register = FieldAttribute(isa='string', static=True) _retries = FieldAttribute(isa='int', default=3) _until = FieldAttribute(isa='list', default=list) # deprecated, used to be loop and loop_args but loop has been repurposed _loop_with = FieldAttribute(isa='string', private=True, inherit=False) def __init__(self, block=None, role=None, task_include=None): ''' constructors a task, without the Task.load classmethod, it will be pretty blank ''' self._role = role self._parent = None self.implicit = False self.resolved_action = None if task_include: self._parent = task_include else: self._parent = block super(Task, self).__init__() def get_name(self, include_role_fqcn=True): ''' return the name of the task ''' if self._role: role_name = self._role.get_name(include_role_fqcn=include_role_fqcn) if self._role and self.name: return "%s : %s" % (role_name, self.name) elif self.name: return self.name else: if self._role: return "%s : %s" % (role_name, self.action) else: return "%s" % (self.action,) def _merge_kv(self, ds): if ds is None: return "" elif isinstance(ds, string_types): return ds elif isinstance(ds, dict): buf = "" for (k, v) in ds.items(): if k.startswith('_'): continue buf = buf + "%s=%s " % (k, v) buf = buf.strip() return buf @staticmethod def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None): t = Task(block=block, role=role, task_include=task_include) return t.load_data(data, variable_manager=variable_manager, loader=loader) def __repr__(self): ''' returns a human readable representation of the task ''' if self.get_name() in C._ACTION_META: return "TASK: meta (%s)" % self.args['_raw_params'] else: return "TASK: %s" % self.get_name() def _preprocess_with_loop(self, ds, new_ds, k, v): ''' take a lookup plugin name and store it correctly ''' loop_name = k.replace("with_", "") if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None: raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds) if v is None: raise AnsibleError("you must specify a value when using %s" % k, obj=ds) new_ds['loop_with'] = loop_name new_ds['loop'] = v # display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead", # version="2.10", collection_name='ansible.builtin') def preprocess_data(self, ds): ''' tasks are especially complex arguments so need pre-processing. keep it short. 
''' if not isinstance(ds, dict): raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds))) # the new, cleaned datastructure, which will have legacy # items reduced to a standard structure suitable for the # attributes of the task class new_ds = AnsibleMapping() if isinstance(ds, AnsibleBaseYAMLObject): new_ds.ansible_pos = ds.ansible_pos # since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator default_collection = AnsibleCollectionConfig.default_collection collections_list = ds.get('collections') if collections_list is None: # use the parent value if our ds doesn't define it collections_list = self.collections else: # Validate this untemplated field early on to guarantee we are dealing with a list. # This is also done in CollectionSearch._load_collections() but this runs before that call. collections_list = self.get_validated_value('collections', self._collections, collections_list, None) if default_collection and not self._role: # FIXME: and not a collections role if collections_list: if default_collection not in collections_list: collections_list.insert(0, default_collection) else: collections_list = [default_collection] if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list: collections_list.append('ansible.legacy') if collections_list: ds['collections'] = collections_list # use the args parsing class to determine the action, args, # and the delegate_to value from the various possible forms # supported as legacy args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list) try: (action, args, delegate_to) = args_parser.parse() except AnsibleParserError as e: # if the raises exception was created with obj=ds args, then it includes the detail # so we dont need to add it so we can just re raise. if e.obj: raise # But if it wasn't, we can add the yaml object now to get more detail raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e) else: self.resolved_action = args_parser.resolved_action # the command/shell/script modules used to support the `cmd` arg, # which corresponds to what we now call _raw_params, so move that # value over to _raw_params (assuming it is empty) if action in C._ACTION_HAS_CMD: if 'cmd' in args: if args.get('_raw_params', '') != '': raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified." " Please put everything in one or the other place.", obj=ds) args['_raw_params'] = args.pop('cmd') new_ds['action'] = action new_ds['args'] = args new_ds['delegate_to'] = delegate_to # we handle any 'vars' specified in the ds here, as we may # be adding things to them below (special handling for includes). # When that deprecated feature is removed, this can be too. 
if 'vars' in ds: # _load_vars is defined in Base, and is used to load a dictionary # or list of dictionaries in a standard way new_ds['vars'] = self._load_vars(None, ds.get('vars')) else: new_ds['vars'] = dict() for (k, v) in ds.items(): if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell': # we don't want to re-assign these values, which were determined by the ModuleArgsParser() above continue elif k.startswith('with_') and k.replace("with_", "") in lookup_loader: # transform into loop property self._preprocess_with_loop(ds, new_ds, k, v) elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self._valid_attrs: new_ds[k] = v else: display.warning("Ignoring invalid attribute: %s" % k) return super(Task, self).preprocess_data(new_ds) def _load_loop_control(self, attr, ds): if not isinstance(ds, dict): raise AnsibleParserError( "the `loop_control` value must be specified as a dictionary and cannot " "be a variable itself (though it can contain variables)", obj=ds, ) return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader) def _validate_attributes(self, ds): try: super(Task, self)._validate_attributes(ds) except AnsibleParserError as e: e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration' raise e def post_validate(self, templar): ''' Override of base class post_validate, to also do final validation on the block and task include (if any) to which this task belongs. ''' if self._parent: self._parent.post_validate(templar) if AnsibleCollectionConfig.default_collection: pass super(Task, self).post_validate(templar) def _post_validate_loop(self, attr, value, templar): ''' Override post validation for the loop field, which is templated specially in the TaskExecutor class when evaluating loops. ''' return value def _post_validate_environment(self, attr, value, templar): ''' Override post validation of vars on the play, as we don't want to template these too early. ''' env = {} if value is not None: def _parse_env_kv(k, v): try: env[k] = templar.template(v, convert_bare=False) except AnsibleUndefinedVariable as e: error = to_native(e) if self.action in C._ACTION_FACT_GATHERING and 'ansible_facts.env' in error or 'ansible_env' in error: # ignore as fact gathering is required for 'env' facts return raise if isinstance(value, list): for env_item in value: if isinstance(env_item, dict): for k in env_item: _parse_env_kv(k, env_item[k]) else: isdict = templar.template(env_item, convert_bare=False) if isinstance(isdict, dict): env.update(isdict) else: display.warning("could not parse environment value, skipping: %s" % value) elif isinstance(value, dict): # should not really happen env = dict() for env_item in value: _parse_env_kv(env_item, value[env_item]) else: # at this point it should be a simple string, also should not happen env = templar.template(value, convert_bare=False) return env def _post_validate_changed_when(self, attr, value, templar): ''' changed_when is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. ''' return value def _post_validate_failed_when(self, attr, value, templar): ''' failed_when is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. 
''' return value def _post_validate_until(self, attr, value, templar): ''' until is evaluated after the execution of the task is complete, and should not be templated during the regular post_validate step. ''' return value def get_vars(self): all_vars = dict() if self._parent: all_vars.update(self._parent.get_vars()) all_vars.update(self.vars) if 'tags' in all_vars: del all_vars['tags'] if 'when' in all_vars: del all_vars['when'] return all_vars def get_include_params(self): all_vars = dict() if self._parent: all_vars.update(self._parent.get_include_params()) if self.action in C._ACTION_ALL_INCLUDES: all_vars.update(self.vars) return all_vars def copy(self, exclude_parent=False, exclude_tasks=False): new_me = super(Task, self).copy() new_me._parent = None if self._parent and not exclude_parent: new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks) new_me._role = None if self._role: new_me._role = self._role new_me.implicit = self.implicit new_me.resolved_action = self.resolved_action return new_me def serialize(self): data = super(Task, self).serialize() if not self._squashed and not self._finalized: if self._parent: data['parent'] = self._parent.serialize() data['parent_type'] = self._parent.__class__.__name__ if self._role: data['role'] = self._role.serialize() data['implicit'] = self.implicit data['resolved_action'] = self.resolved_action return data def deserialize(self, data): # import is here to avoid import loops from ansible.playbook.task_include import TaskInclude from ansible.playbook.handler_task_include import HandlerTaskInclude parent_data = data.get('parent', None) if parent_data: parent_type = data.get('parent_type') if parent_type == 'Block': p = Block() elif parent_type == 'TaskInclude': p = TaskInclude() elif parent_type == 'HandlerTaskInclude': p = HandlerTaskInclude() p.deserialize(parent_data) self._parent = p del data['parent'] role_data = data.get('role') if role_data: r = Role() r.deserialize(role_data) self._role = r del data['role'] self.implicit = data.get('implicit', False) self.resolved_action = data.get('resolved_action') super(Task, self).deserialize(data) def set_loader(self, loader): ''' Sets the loader on this object and recursively on parent, child objects. This is used primarily after the Task has been serialized/deserialized, which does not preserve the loader. ''' self._loader = loader if self._parent: self._parent.set_loader(loader) def _get_parent_attribute(self, attr, extend=False, prepend=False): ''' Generic logic to get the attribute or parent attribute for a task value. 
''' extend = self._valid_attrs[attr].extend prepend = self._valid_attrs[attr].prepend try: value = self._attributes[attr] # If parent is static, we can grab attrs from the parent # otherwise, defer to the grandparent if getattr(self._parent, 'statically_loaded', True): _parent = self._parent else: _parent = self._parent._parent if _parent and (value is Sentinel or extend): if getattr(_parent, 'statically_loaded', True): # vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'): parent_value = _parent._get_parent_attribute(attr) else: parent_value = _parent._attributes.get(attr, Sentinel) if extend: value = self._extend_value(value, parent_value, prepend) else: value = parent_value except KeyError: pass return value def all_parents_static(self): if self._parent: return self._parent.all_parents_static() return True def get_first_parent_include(self): from ansible.playbook.task_include import TaskInclude if self._parent: if isinstance(self._parent, TaskInclude): return self._parent return self._parent.get_first_parent_include() return None
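The two attribute declarations that carry this fix are `_changed_when` and `_failed_when`, now defined with `isa='list', default=list` in the `Task` class above. Declaring the field list-typed means scalar YAML values are wrapped into one-element lists during attribute validation, so the downstream iteration in `Conditional` is always safe. A hedged sketch of that coercion, modeled on the observable behavior rather than copied from `ansible/playbook/base.py`:

```python
# Hedged sketch of what an `isa='list'` field attribute implies for loaded
# values: None becomes [], scalars are wrapped, lists pass through unchanged.
def coerce_list_attribute(value):
    if value is None:
        return []
    if not isinstance(value, list):
        return [value]
    return value

assert coerce_list_attribute(True) == [True]                # changed_when: true
assert coerce_list_attribute('rc != 2') == ['rc != 2']      # single expression
assert coerce_list_attribute(['a == 1', 'b == 2']) == ['a == 1', 'b == 2']
assert coerce_list_attribute(None) == []
```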
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME add_host module ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION ``` $ ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Local action, Mac OS ##### STEPS TO REPRODUCE Run this playbook with >=1 host in the inventory ```yaml --- - name: add hosts to inventory hosts: all gather_facts: false connection: local vars: num_hosts: 10 tasks: - name: create inventory add_host: name: 'host-{{item}}' groups: dynamic ansible_connection: local host_id: '{{item}}' with_sequence: start=1 end={{num_hosts}} format=%d # changed_when: true notify: - single host handler handlers: - name: single host handler command: 'true' ``` ##### EXPECTED RESULTS I expect the tasks to be changed, and I expect the handler to be run. Look at the output from Ansible 2.9 ``` $ ansible-playbook -i host1, dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************* TASK [create inventory] ************************************************************************************************************* changed: [host1] => (item=1) changed: [host1] => (item=2) changed: [host1] => (item=3) changed: [host1] => (item=4) changed: [host1] => (item=5) changed: [host1] => (item=6) changed: [host1] => (item=7) changed: [host1] => (item=8) changed: [host1] => (item=9) changed: [host1] => (item=10) RUNNING HANDLER [single host handler] *********************************************************************************************** [WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. 
See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. changed: [host1] PLAY RECAP ************************************************************************************************************** host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ##### ACTUAL RESULTS with `# changed_when: true` left as a comment ``` $ ansible-playbook -i host1, dynamic_inventory.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [add hosts to inventory] ******************************************************************************************************* TASK [create inventory] ************************************************************************************************************* ok: [host1] => (item=1) ok: [host1] => (item=2) ok: [host1] => (item=3) ok: [host1] => (item=4) ok: [host1] => (item=5) ok: [host1] => (item=6) ok: [host1] => (item=7) ok: [host1] => (item=8) ok: [host1] => (item=9) ok: [host1] => (item=10) PLAY RECAP ************************************************************************************************************** host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` no handler is run When uncommenting `changed_when: true` ``` $ ansible-playbook -i host1, dynamic_inventory.yml -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-playbook 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file Parsed host1, inventory source with host_list plugin PLAYBOOK: dynamic_inventory.yml ***************************************************************************************************************************************************** 1 plays in dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* META: ran handlers TASK [create inventory] ************************************************************************************************************************************************************* task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9 creating host via 'add_host': hostname=host-1 changed: [host1] => (item=1) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-1", "host_vars": { "ansible_connection": "local", "host_id": "1" } }, "ansible_loop_var": "item", "changed": true, "item": "1" } creating host via 'add_host': hostname=host-2 changed: [host1] => (item=2) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-2", "host_vars": { "ansible_connection": "local", "host_id": "2" } }, "ansible_loop_var": "item", "changed": true, "item": "2" } creating host via 'add_host': hostname=host-3 changed: [host1] => (item=3) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-3", "host_vars": { "ansible_connection": "local", "host_id": "3" } }, "ansible_loop_var": "item", "changed": true, "item": "3" } creating host via 'add_host': hostname=host-4 changed: [host1] => (item=4) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-4", "host_vars": { "ansible_connection": "local", "host_id": "4" } }, "ansible_loop_var": "item", "changed": true, "item": "4" } creating host via 'add_host': hostname=host-5 changed: [host1] => (item=5) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-5", "host_vars": { "ansible_connection": "local", "host_id": "5" } }, "ansible_loop_var": "item", "changed": true, "item": "5" } creating host via 'add_host': hostname=host-6 changed: [host1] => (item=6) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-6", "host_vars": { "ansible_connection": "local", "host_id": "6" } }, "ansible_loop_var": "item", "changed": true, "item": "6" } creating host via 'add_host': hostname=host-7 changed: [host1] => (item=7) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-7", "host_vars": { "ansible_connection": "local", "host_id": "7" } }, "ansible_loop_var": "item", "changed": true, "item": "7" } creating host via 'add_host': hostname=host-8 changed: [host1] => (item=8) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-8", "host_vars": { "ansible_connection": "local", "host_id": "8" } }, "ansible_loop_var": "item", "changed": 
true, "item": "8" } creating host via 'add_host': hostname=host-9 changed: [host1] => (item=9) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-9", "host_vars": { "ansible_connection": "local", "host_id": "9" } }, "ansible_loop_var": "item", "changed": true, "item": "9" } creating host via 'add_host': hostname=host-10 changed: [host1] => (item=10) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-10", "host_vars": { "ansible_connection": "local", "host_id": "10" } }, "ansible_loop_var": "item", "changed": true, "item": "10" } NOTIFIED HANDLER single host handler for host1 ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional for conditional in self.when: TypeError: 'bool' object is not iterable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run results = pbex.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run result = self._tqm.run(play=play) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run play_return = strategy.run(iterator, play_context) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run results += self._wait_on_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results results = self._process_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results post_process_whens(result_item, original_task, handler_templar) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds) UnboundLocalError: local variable 'conditional' referenced before assignment ```
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
lib/ansible/plugins/strategy/__init__.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import cmd import functools import os import pprint import sys import threading import time from collections import deque from multiprocessing import Lock from queue import Queue from jinja2.exceptions import UndefinedError from ansible import constants as C from ansible import context from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError from ansible.executor import action_write_locks from ansible.executor.play_iterator import IteratingStates, FailedStates from ansible.executor.process.worker import WorkerProcess from ansible.executor.task_result import TaskResult from ansible.executor.task_queue_manager import CallbackSend from ansible.module_utils.six import string_types from ansible.module_utils._text import to_text from ansible.module_utils.connection import Connection, ConnectionError from ansible.playbook.conditional import Conditional from ansible.playbook.handler import Handler from ansible.playbook.helpers import load_list_of_blocks from ansible.playbook.included_file import IncludedFile from ansible.playbook.task import Task from ansible.playbook.task_include import TaskInclude from ansible.plugins import loader as plugin_loader from ansible.template import Templar from ansible.utils.display import Display from ansible.utils.fqcn import add_internal_fqcns from ansible.utils.unsafe_proxy import wrap_var from ansible.utils.vars import combine_vars from ansible.vars.clean import strip_internal_keys, module_response_deepcopy display = Display() __all__ = ['StrategyBase'] # This list can be an exact match, or start of string bound # does not accept regex ALWAYS_DELEGATE_FACT_PREFIXES = frozenset(( 'discovered_interpreter_', )) class StrategySentinel: pass _sentinel = StrategySentinel() def post_process_whens(result, task, templar): cond = None if task.changed_when: cond = Conditional(loader=templar._loader) cond.when = task.changed_when result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) if task.failed_when: if cond is None: cond = Conditional(loader=templar._loader) cond.when = task.failed_when failed_when_result = cond.evaluate_conditional(templar, templar.available_variables) result['failed_when_result'] = result['failed'] = failed_when_result def results_thread_main(strategy): while True: try: result = strategy._final_q.get() if isinstance(result, StrategySentinel): break elif isinstance(result, CallbackSend): for arg in result.args: if isinstance(arg, TaskResult): strategy.normalize_task_result(arg) break strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs) elif isinstance(result, TaskResult): strategy.normalize_task_result(result) with strategy._results_lock: # 
only handlers have the listen attr, so this must be a handler # we split up the results into two queues here to make sure # handler and regular result processing don't cross wires if 'listen' in result._task_fields: strategy._handler_results.append(result) else: strategy._results.append(result) else: display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result)) except (IOError, EOFError): break except Queue.Empty: pass def debug_closure(func): """Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger""" @functools.wraps(func) def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False): status_to_stats_map = ( ('is_failed', 'failures'), ('is_unreachable', 'dark'), ('is_changed', 'changed'), ('is_skipped', 'skipped'), ) # We don't know the host yet, copy the previous states, for lookup after we process new results prev_host_states = iterator._host_states.copy() results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) _processed_results = [] for result in results: task = result._task host = result._host _queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None) task_vars = _queued_task_args['task_vars'] play_context = _queued_task_args['play_context'] # Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state try: prev_host_state = prev_host_states[host.name] except KeyError: prev_host_state = iterator.get_host_state(host) while result.needs_debugger(globally_enabled=self.debugger_active): next_action = NextAction() dbg = Debugger(task, host, task_vars, play_context, result, next_action) dbg.cmdloop() if next_action.result == NextAction.REDO: # rollback host state self._tqm.clear_failed_hosts() if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed(): for host_name, state in prev_host_states.items(): if host_name == host.name: continue iterator.set_state_for_host(host_name, state) iterator._play._removed_hosts.remove(host_name) iterator.set_state_for_host(host.name, prev_host_state) for method, what in status_to_stats_map: if getattr(result, method)(): self._tqm._stats.decrement(what, host.name) self._tqm._stats.decrement('ok', host.name) # redo self._queue_task(host, task, task_vars, play_context) _processed_results.extend(debug_closure(func)(self, iterator, one_pass)) break elif next_action.result == NextAction.CONTINUE: _processed_results.append(result) break elif next_action.result == NextAction.EXIT: # Matches KeyboardInterrupt from bin/ansible sys.exit(99) else: _processed_results.append(result) return _processed_results return inner class StrategyBase: ''' This is the base class for strategy plugins, which contains some common code useful to all strategies like running handlers, cleanup actions, etc. 
''' # by default, strategies should support throttling but we allow individual # strategies to disable this and either forego supporting it or managing # the throttling internally (as `free` does) ALLOW_BASE_THROTTLING = True def __init__(self, tqm): self._tqm = tqm self._inventory = tqm.get_inventory() self._workers = tqm._workers self._variable_manager = tqm.get_variable_manager() self._loader = tqm.get_loader() self._final_q = tqm._final_q self._step = context.CLIARGS.get('step', False) self._diff = context.CLIARGS.get('diff', False) # the task cache is a dictionary of tuples of (host.name, task._uuid) # used to find the original task object of in-flight tasks and to store # the task args/vars and play context info used to queue the task. self._queued_task_cache = {} # Backwards compat: self._display isn't really needed, just import the global display and use that. self._display = display # internal counters self._pending_results = 0 self._pending_handler_results = 0 self._cur_worker = 0 # this dictionary is used to keep track of hosts that have # outstanding tasks still in queue self._blocked_hosts = dict() # this dictionary is used to keep track of hosts that have # flushed handlers self._flushed_hosts = dict() self._results = deque() self._handler_results = deque() self._results_lock = threading.Condition(threading.Lock()) # create the result processing thread for reading results in the background self._results_thread = threading.Thread(target=results_thread_main, args=(self,)) self._results_thread.daemon = True self._results_thread.start() # holds the list of active (persistent) connections to be shutdown at # play completion self._active_connections = dict() # Caches for get_host calls, to avoid calling excessively # These values should be set at the top of the ``run`` method of each # strategy plugin. Use ``_set_hosts_cache`` to set these values self._hosts_cache = [] self._hosts_cache_all = [] self.debugger_active = C.ENABLE_TASK_DEBUGGER def _set_hosts_cache(self, play, refresh=True): """Responsible for setting _hosts_cache and _hosts_cache_all See comment in ``__init__`` for the purpose of these caches """ if not refresh and all((self._hosts_cache, self._hosts_cache_all)): return if not play.finalized and Templar(None).is_template(play.hosts): _pattern = 'all' else: _pattern = play.hosts or 'all' self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)] self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)] def cleanup(self): # close active persistent connections for sock in self._active_connections.values(): try: conn = Connection(sock) conn.reset() except ConnectionError as e: # most likely socket is already closed display.debug("got an error while closing persistent connection: %s" % e) self._final_q.put(_sentinel) self._results_thread.join() def run(self, iterator, play_context, result=0): # execute one more pass through the iterator without peeking, to # make sure that all of the hosts are advanced to their final task. # This should be safe, as everything should be IteratingStates.COMPLETE by # this point, though the strategy may not advance the hosts itself. 
        for host in self._hosts_cache:
            if host not in self._tqm._unreachable_hosts:
                try:
                    iterator.get_next_task_for_host(self._inventory.hosts[host])
                except KeyError:
                    iterator.get_next_task_for_host(self._inventory.get_host(host))

        # save the failed/unreachable hosts, as the run_handlers()
        # method will clear that information during its execution
        failed_hosts = iterator.get_failed_hosts()
        unreachable_hosts = self._tqm._unreachable_hosts.keys()

        display.debug("running handlers")
        handler_result = self.run_handlers(iterator, play_context)
        if isinstance(handler_result, bool) and not handler_result:
            result |= self._tqm.RUN_ERROR
        elif not handler_result:
            result |= handler_result

        # now update with the hosts (if any) that failed or were
        # unreachable during the handler execution phase
        failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
        unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())

        # return the appropriate code, depending on the status hosts after the run
        if not isinstance(result, bool) and result != self._tqm.RUN_OK:
            return result
        elif len(unreachable_hosts) > 0:
            return self._tqm.RUN_UNREACHABLE_HOSTS
        elif len(failed_hosts) > 0:
            return self._tqm.RUN_FAILED_HOSTS
        else:
            return self._tqm.RUN_OK

    def get_hosts_remaining(self, play):
        self._set_hosts_cache(play, refresh=False)
        ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
        return [host for host in self._hosts_cache if host not in ignore]

    def get_failed_hosts(self, play):
        self._set_hosts_cache(play, refresh=False)
        return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]

    def add_tqm_variables(self, vars, play):
        '''
        Base class method to add extra variables/information to the list of task
        vars sent through the executor engine regarding the task queue manager state.
        '''
        vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
        vars['ansible_failed_hosts'] = self.get_failed_hosts(play)

    def _queue_task(self, host, task, task_vars, play_context):
        ''' handles queueing the task up to be sent to a worker '''

        display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))

        # Add a write lock for tasks.
        # Maybe this should be added somewhere further up the call stack but
        # this is the earliest in the code where we have task (1) extracted
        # into its own variable and (2) there's only a single code path
        # leading to the module being run.  This is called by three
        # functions: __init__.py::_do_handler_run(), linear.py::run(), and
        # free.py::run() so we'd have to add to all three to do it there.
        # The next common higher level is __init__.py::run() and that has
        # tasks inside of play_iterator so we'd have to extract them to do it
        # there.
        if task.action not in action_write_locks.action_write_locks:
            display.debug('Creating lock for %s' % task.action)
            action_write_locks.action_write_locks[task.action] = Lock()

        # create a templar and template things we need later for the queuing process
        templar = Templar(loader=self._loader, variables=task_vars)

        try:
            throttle = int(templar.template(task.throttle))
        except Exception as e:
            raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)

        # and then queue the new task
        try:
            # Determine the "rewind point" of the worker list. This means we start
            # iterating over the list of workers until the end of the list is found.
            # Normally, that is simply the length of the workers list (as determined
            # by the forks or serial setting), however a task/block/play may "throttle"
            # that limit down.
            rewind_point = len(self._workers)
            if throttle > 0 and self.ALLOW_BASE_THROTTLING:
                if task.run_once:
                    display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
                else:
                    if throttle <= rewind_point:
                        display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
                        rewind_point = throttle

            queued = False
            starting_worker = self._cur_worker
            while True:
                if self._cur_worker >= rewind_point:
                    self._cur_worker = 0

                worker_prc = self._workers[self._cur_worker]
                if worker_prc is None or not worker_prc.is_alive():
                    self._queued_task_cache[(host.name, task._uuid)] = {
                        'host': host,
                        'task': task,
                        'task_vars': task_vars,
                        'play_context': play_context
                    }

                    worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
                    self._workers[self._cur_worker] = worker_prc
                    self._tqm.send_callback('v2_runner_on_start', host, task)
                    worker_prc.start()
                    display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
                    queued = True

                self._cur_worker += 1

                if self._cur_worker >= rewind_point:
                    self._cur_worker = 0

                if queued:
                    break
                elif self._cur_worker == starting_worker:
                    time.sleep(0.0001)

            if isinstance(task, Handler):
                self._pending_handler_results += 1
            else:
                self._pending_results += 1
        except (EOFError, IOError, AssertionError) as e:
            # most likely an abort
            display.debug("got an error while queuing: %s" % e)
            return
        display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))

    def get_task_hosts(self, iterator, task_host, task):
        if task.run_once:
            host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
        else:
            host_list = [task_host.name]
        return host_list

    def get_delegated_hosts(self, result, task):
        host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
        return [host_name or task.delegate_to]

    def _set_always_delegated_facts(self, result, task):
        """Sets host facts for ``delegate_to`` hosts for facts that should
        always be delegated

        This operation mutates ``result`` to remove the always delegated facts

        See ``ALWAYS_DELEGATE_FACT_PREFIXES``
        """
        if task.delegate_to is None:
            return

        facts = result['ansible_facts']
        always_keys = set()
        _add = always_keys.add
        for fact_key in facts:
            for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
                if fact_key.startswith(always_key):
                    _add(fact_key)
        if always_keys:
            _pop = facts.pop
            always_facts = {
                'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
            }
            host_list = self.get_delegated_hosts(result, task)
            _set_host_facts = self._variable_manager.set_host_facts
            for target_host in host_list:
                _set_host_facts(target_host, always_facts)

    def normalize_task_result(self, task_result):
        """Normalize a TaskResult to reference actual Host and Task objects
        when only given the ``Host.name``, or the ``Task._uuid``

        Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
        the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns

        Mutates the original object
        """
        if isinstance(task_result._host, string_types):
            # If the value is a string, it is ``Host.name``
            task_result._host = self._inventory.get_host(to_text(task_result._host))

        if isinstance(task_result._task, string_types):
            # If the value is a string, it is ``Task._uuid``
            queue_cache_entry = (task_result._host.name, task_result._task)
            try:
                found_task = self._queued_task_cache[queue_cache_entry]['task']
            except KeyError:
                # This should only happen due to an implicit task created by the
                # TaskExecutor, restrict this behavior to the explicit use case
                # of an implicit async_status task
                if task_result._task_fields.get('action') != 'async_status':
                    raise
                original_task = Task()
            else:
                original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
                original_task._parent = found_task._parent
            original_task.from_attrs(task_result._task_fields)
            task_result._task = original_task

        return task_result

    @debug_closure
    def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
        '''
        Reads results off the final queue and takes appropriate action
        based on the result (executing callbacks, updating state, etc.).
        '''

        ret_results = []
        handler_templar = Templar(self._loader)

        def search_handler_blocks_by_name(handler_name, handler_blocks):
            # iterate in reversed order since last handler loaded with the same name wins
            for handler_block in reversed(handler_blocks):
                for handler_task in handler_block.block:
                    if handler_task.name:
                        try:
                            if not handler_task.cached_name:
                                if handler_templar.is_template(handler_task.name):
                                    handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
                                                                                                          task=handler_task,
                                                                                                          _hosts=self._hosts_cache,
                                                                                                          _hosts_all=self._hosts_cache_all)
                                    handler_task.name = handler_templar.template(handler_task.name)
                                handler_task.cached_name = True

                            # first we check with the full result of get_name(), which may
                            # include the role name (if the handler is from a role). If that
                            # is not found, we resort to the simple name field, which doesn't
                            # have anything extra added to it.
                            candidates = (
                                handler_task.name,
                                handler_task.get_name(include_role_fqcn=False),
                                handler_task.get_name(include_role_fqcn=True),
                            )

                            if handler_name in candidates:
                                return handler_task
                        except (UndefinedError, AnsibleUndefinedVariable) as e:
                            # We skip this handler due to the fact that it may be using
                            # a variable in the name that was conditionally included via
                            # set_fact or some other method, and we don't want to error
                            # out unnecessarily
                            if not handler_task.listen:
                                display.warning(
                                    "Handler '%s' is unusable because it has no listen topics and "
                                    "the name could not be templated (host-specific variables are "
                                    "not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
                                )
                            continue
            return None

        cur_pass = 0
        while True:
            try:
                self._results_lock.acquire()
                if do_handlers:
                    task_result = self._handler_results.popleft()
                else:
                    task_result = self._results.popleft()
            except IndexError:
                break
            finally:
                self._results_lock.release()

            original_host = task_result._host
            original_task = task_result._task

            # all host status messages contain 2 entries: (msg, task_result)
            role_ran = False
            if task_result.is_failed():
                role_ran = True
                ignore_errors = original_task.ignore_errors
                if not ignore_errors:
                    display.debug("marking %s as failed" % original_host.name)
                    if original_task.run_once:
                        # if we're using run_once, we have to fail every host here
                        for h in self._inventory.get_hosts(iterator._play.hosts):
                            if h.name not in self._tqm._unreachable_hosts:
                                iterator.mark_host_failed(h)
                    else:
                        iterator.mark_host_failed(original_host)

                    # grab the current state and if we're iterating on the rescue portion
                    # of a block then we save the failed task in a special var for use
                    # within the rescue/always
                    state, _ = iterator.get_next_task_for_host(original_host, peek=True)

                    if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
                        self._tqm._failed_hosts[original_host.name] = True

                    # Use of get_active_state() here helps detect proper state if, say, we are in a rescue
                    # block from an included file (include_tasks). In a non-included rescue case, a rescue
                    # that starts with a new 'block' will have an active state of IteratingStates.TASKS, so we also
                    # check the current state block tree to see if any blocks are rescuing.
                    if state and (iterator.get_active_state(state).run_state == IteratingStates.RESCUE or
                                  iterator.is_any_block_rescuing(state)):
                        self._tqm._stats.increment('rescued', original_host.name)
                        self._variable_manager.set_nonpersistent_facts(
                            original_host.name,
                            dict(
                                ansible_failed_task=wrap_var(original_task.serialize()),
                                ansible_failed_result=task_result._result,
                            ),
                        )
                    else:
                        self._tqm._stats.increment('failures', original_host.name)
                else:
                    self._tqm._stats.increment('ok', original_host.name)
                    self._tqm._stats.increment('ignored', original_host.name)
                    if 'changed' in task_result._result and task_result._result['changed']:
                        self._tqm._stats.increment('changed', original_host.name)
                self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
            elif task_result.is_unreachable():
                ignore_unreachable = original_task.ignore_unreachable
                if not ignore_unreachable:
                    self._tqm._unreachable_hosts[original_host.name] = True
                    iterator._play._removed_hosts.append(original_host.name)
                else:
                    self._tqm._stats.increment('skipped', original_host.name)
                    task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
                self._tqm._stats.increment('dark', original_host.name)
                self._tqm.send_callback('v2_runner_on_unreachable', task_result)
            elif task_result.is_skipped():
                self._tqm._stats.increment('skipped', original_host.name)
                self._tqm.send_callback('v2_runner_on_skipped', task_result)
            else:
                role_ran = True

                if original_task.loop:
                    # this task had a loop, and has more than one result, so
                    # loop over all of them instead of a single result
                    result_items = task_result._result.get('results', [])
                else:
                    result_items = [task_result._result]

                for result_item in result_items:
                    if '_ansible_notify' in result_item:
                        if task_result.is_changed():
                            # The shared dictionary for notified handlers is a proxy, which
                            # does not detect when sub-objects within the proxy are modified.
                            # So, per the docs, we reassign the list so the proxy picks up and
                            # notifies all other threads
                            for handler_name in result_item['_ansible_notify']:
                                found = False
                                # Find the handler using the above helper.  First we look up the
                                # dependency chain of the current task (if it's from a role), otherwise
                                # we just look through the list of handlers in the current play/all
                                # roles and use the first one that matches the notify name
                                target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
                                if target_handler is not None:
                                    found = True
                                    if target_handler.notify_host(original_host):
                                        self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)

                                for listening_handler_block in iterator._play.handlers:
                                    for listening_handler in listening_handler_block.block:
                                        listeners = getattr(listening_handler, 'listen', []) or []
                                        if not listeners:
                                            continue

                                        listeners = listening_handler.get_validated_value(
                                            'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
                                        )
                                        if handler_name not in listeners:
                                            continue
                                        else:
                                            found = True

                                        if listening_handler.notify_host(original_host):
                                            self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)

                                # and if none were found, then we raise an error
                                if not found:
                                    msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
                                           "handlers list" % handler_name)
                                    if C.ERROR_ON_MISSING_HANDLER:
                                        raise AnsibleError(msg)
                                    else:
                                        display.warning(msg)

                    if 'add_host' in result_item:
                        # this task added a new host (add_host module)
                        new_host_info = result_item.get('add_host', dict())
                        self._add_host(new_host_info, result_item)
                        post_process_whens(result_item, original_task, handler_templar)

                    elif 'add_group' in result_item:
                        # this task added a new group (group_by module)
                        self._add_group(original_host, result_item)
                        post_process_whens(result_item, original_task, handler_templar)

                    if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
                        # if delegated fact and we are delegating facts, we need to change target host for them
                        if original_task.delegate_to is not None and original_task.delegate_facts:
                            host_list = self.get_delegated_hosts(result_item, original_task)
                        else:
                            # Set facts that should always be on the delegated hosts
                            self._set_always_delegated_facts(result_item, original_task)

                            host_list = self.get_task_hosts(iterator, original_host, original_task)

                        if original_task.action in C._ACTION_INCLUDE_VARS:
                            for (var_name, var_value) in result_item['ansible_facts'].items():
                                # find the host we're actually referring too here, which may
                                # be a host that is not really in inventory at all
                                for target_host in host_list:
                                    self._variable_manager.set_host_variable(target_host, var_name, var_value)
                        else:
                            cacheable = result_item.pop('_ansible_facts_cacheable', False)
                            for target_host in host_list:
                                # so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
                                # to avoid issues with precedence and confusion with set_fact normal operation,
                                # we set BOTH fact and nonpersistent_facts (aka hostvar)
                                # when fact is retrieved from cache in subsequent operations it will have the lower precedence,
                                # but for playbook setting it the 'higher' precedence is kept
                                is_set_fact = original_task.action in C._ACTION_SET_FACT
                                if not is_set_fact or cacheable:
                                    self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
                                if is_set_fact:
                                    self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())

                    if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:

                        if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
                            host_list = self.get_task_hosts(iterator, original_host, original_task)
                        else:
                            host_list = [None]

                        data = result_item['ansible_stats']['data']
                        aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
                        for myhost in host_list:
                            for k in data.keys():
                                if aggregate:
                                    self._tqm._stats.update_custom_stats(k, data[k], myhost)
                                else:
                                    self._tqm._stats.set_custom_stats(k, data[k], myhost)

                if 'diff' in task_result._result:
                    if self._diff or getattr(original_task, 'diff', False):
                        self._tqm.send_callback('v2_on_file_diff', task_result)

                if not isinstance(original_task, TaskInclude):
                    self._tqm._stats.increment('ok', original_host.name)
                    if 'changed' in task_result._result and task_result._result['changed']:
                        self._tqm._stats.increment('changed', original_host.name)

                # finally, send the ok for this task
                self._tqm.send_callback('v2_runner_on_ok', task_result)

            # register final results
            if original_task.register:
                host_list = self.get_task_hosts(iterator, original_host, original_task)

                clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
                if 'invocation' in clean_copy:
                    del clean_copy['invocation']

                for target_host in host_list:
                    self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})

            if do_handlers:
                self._pending_handler_results -= 1
            else:
                self._pending_results -= 1
            if original_host.name in self._blocked_hosts:
                del self._blocked_hosts[original_host.name]

            # If this is a role task, mark the parent role as being run (if
            # the task was ok or failed, but not skipped or unreachable)
            if original_task._role is not None and role_ran:  # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
                # lookup the role in the ROLE_CACHE to make sure we're dealing
                # with the correct object and mark it as executed
                for (entry, role_obj) in iterator._play.ROLE_CACHE[original_task._role.get_name()].items():
                    if role_obj._uuid == original_task._role._uuid:
                        role_obj._had_task_run[original_host.name] = True

            ret_results.append(task_result)

            if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
                break

            cur_pass += 1

        return ret_results

    def _wait_on_handler_results(self, iterator, handler, notified_hosts):
        '''
        Wait for the handler tasks to complete, using a short sleep
        between checks to ensure we don't spin lock
        '''

        ret_results = []
        handler_results = 0

        display.debug("waiting for handler results...")
        while (self._pending_handler_results > 0 and
               handler_results < len(notified_hosts) and
               not self._tqm._terminated):

            if self._tqm.has_dead_workers():
                raise AnsibleError("A worker was found in a dead state")

            results = self._process_pending_results(iterator, do_handlers=True)
            ret_results.extend(results)
            handler_results += len([
                r._host for r in results if r._host in notified_hosts and
                r.task_name == handler.name])
            if self._pending_handler_results > 0:
                time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)

        display.debug("no more pending handlers, returning what we have")

        return ret_results

    def _wait_on_pending_results(self, iterator):
        '''
        Wait for the shared counter to drop to zero, using a short sleep
        between checks to ensure we don't spin lock
        '''

        ret_results = []

        display.debug("waiting for pending results...")
        while self._pending_results > 0 and not self._tqm._terminated:

            if self._tqm.has_dead_workers():
                raise AnsibleError("A worker was found in a dead state")

            results = self._process_pending_results(iterator)
            ret_results.extend(results)
            if self._pending_results > 0:
                time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)

        display.debug("no more pending results, returning what we have")

        return ret_results

    def _add_host(self, host_info, result_item):
        '''
        Helper function to add a new host to inventory based on a task result.
        '''

        changed = False

        if host_info:
            host_name = host_info.get('host_name')

            # Check if host in inventory, add if not
            if host_name not in self._inventory.hosts:
                self._inventory.add_host(host_name, 'all')
                self._hosts_cache_all.append(host_name)
                changed = True

            new_host = self._inventory.hosts.get(host_name)

            # Set/update the vars for this host
            new_host_vars = new_host.get_vars()
            new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
            if new_host_vars != new_host_combined_vars:
                new_host.vars = new_host_combined_vars
                changed = True

            new_groups = host_info.get('groups', [])
            for group_name in new_groups:
                if group_name not in self._inventory.groups:
                    group_name = self._inventory.add_group(group_name)
                    changed = True
                new_group = self._inventory.groups[group_name]
                if new_group.add_host(self._inventory.hosts[host_name]):
                    changed = True

            # reconcile inventory, ensures inventory rules are followed
            if changed:
                self._inventory.reconcile_inventory()

        result_item['changed'] = changed

    def _add_group(self, host, result_item):
        '''
        Helper function to add a group (if it does not exist), and to assign the
        specified host to that group.
        '''

        changed = False

        # the host here is from the executor side, which means it was a
        # serialized/cloned copy and we'll need to look up the proper
        # host object from the master inventory
        real_host = self._inventory.hosts.get(host.name)
        if real_host is None:
            if host.name == self._inventory.localhost.name:
                real_host = self._inventory.localhost
            else:
                raise AnsibleError('%s cannot be matched in inventory' % host.name)
        group_name = result_item.get('add_group')
        parent_group_names = result_item.get('parent_groups', [])

        if group_name not in self._inventory.groups:
            group_name = self._inventory.add_group(group_name)

        for name in parent_group_names:
            if name not in self._inventory.groups:
                # create the new group and add it to inventory
                self._inventory.add_group(name)
                changed = True

        group = self._inventory.groups[group_name]
        for parent_group_name in parent_group_names:
            parent_group = self._inventory.groups[parent_group_name]
            new = parent_group.add_child_group(group)
            if new and not changed:
                changed = True

        if real_host not in group.get_hosts():
            changed = group.add_host(real_host)

        if group not in real_host.get_groups():
            changed = real_host.add_group(group)

        if changed:
            self._inventory.reconcile_inventory()

        result_item['changed'] = changed

    def _copy_included_file(self, included_file):
        '''
        A proven safe and performant way to create a copy of an included file
        '''
        ti_copy = included_file._task.copy(exclude_parent=True)
        ti_copy._parent = included_file._task._parent

        temp_vars = ti_copy.vars.copy()
        temp_vars.update(included_file._vars)
        ti_copy.vars = temp_vars

        return ti_copy

    def _load_included_file(self, included_file, iterator, is_handler=False):
        '''
        Loads an included YAML file of tasks, applying the optional set of variables.
        '''

        display.debug("loading included file: %s" % included_file._filename)
        try:
            data = self._loader.load_from_file(included_file._filename)
            if data is None:
                return []
            elif not isinstance(data, list):
                raise AnsibleError("included task files must contain a list of tasks")

            ti_copy = self._copy_included_file(included_file)

            block_list = load_list_of_blocks(
                data,
                play=iterator._play,
                parent_block=ti_copy.build_parent_block(),
                role=included_file._task._role,
                use_handlers=is_handler,
                loader=self._loader,
                variable_manager=self._variable_manager,
            )

            # since we skip incrementing the stats when the task result is
            # first processed, we do so now for each host in the list
            for host in included_file._hosts:
                self._tqm._stats.increment('ok', host.name)
        except AnsibleParserError:
            raise
        except AnsibleError as e:
            if isinstance(e, AnsibleFileNotFound):
                reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
            else:
                reason = to_text(e)

            for r in included_file._results:
                r._result['failed'] = True

            # mark all of the hosts including this file as failed, send callbacks,
            # and increment the stats for this host
            for host in included_file._hosts:
                tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
                iterator.mark_host_failed(host)
                self._tqm._failed_hosts[host.name] = True
                self._tqm._stats.increment('failures', host.name)
                self._tqm.send_callback('v2_runner_on_failed', tr)
            return []

        # finally, send the callback and return the list of blocks loaded
        self._tqm.send_callback('v2_playbook_on_include', included_file)
        display.debug("done processing included file")
        return block_list

    def run_handlers(self, iterator, play_context):
        '''
        Runs handlers on those hosts which have been notified.
        '''

        result = self._tqm.RUN_OK

        for handler_block in iterator._play.handlers:
            # FIXME: handlers need to support the rescue/always portions of blocks too,
            #        but this may take some work in the iterator and gets tricky when
            #        we consider the ability of meta tasks to flush handlers
            for handler in handler_block.block:
                if handler.notified_hosts:
                    result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
                    if not result:
                        break
        return result

    def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):

        # FIXME: need to use iterator.get_failed_hosts() instead?
        # if not len(self.get_hosts_remaining(iterator._play)):
        #     self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
        #     result = False
        #     break
        if notified_hosts is None:
            notified_hosts = handler.notified_hosts[:]

        # strategy plugins that filter hosts need access to the iterator to identify failed hosts
        failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
        notified_hosts = self._filter_notified_hosts(notified_hosts)
        notified_hosts += failed_hosts

        if len(notified_hosts) > 0:
            self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)

        bypass_host_loop = False
        try:
            action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
            if getattr(action, 'BYPASS_HOST_LOOP', False):
                bypass_host_loop = True
        except KeyError:
            # we don't care here, because the action may simply not have a
            # corresponding action plugin
            pass

        host_results = []
        for host in notified_hosts:
            if not iterator.is_failed(host) or iterator._play.force_handlers:
                task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
                                                            _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
                self.add_tqm_variables(task_vars, play=iterator._play)
                templar = Templar(loader=self._loader, variables=task_vars)
                if not handler.cached_name:
                    handler.name = templar.template(handler.name)
                    handler.cached_name = True

                self._queue_task(host, handler, task_vars, play_context)

                if templar.template(handler.run_once) or bypass_host_loop:
                    break

        # collect the results from the handler run
        host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)

        included_files = IncludedFile.process_include_results(
            host_results,
            iterator=iterator,
            loader=self._loader,
            variable_manager=self._variable_manager
        )

        result = True
        if len(included_files) > 0:
            for included_file in included_files:
                try:
                    new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
                    # for every task in each block brought in by the include, add the list
                    # of hosts which included the file to the notified_handlers dict
                    for block in new_blocks:
                        iterator._play.handlers.append(block)
                        for task in block.block:
                            task_name = task.get_name()
                            display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
                            task.notified_hosts = included_file._hosts[:]
                            result = self._do_handler_run(
                                handler=task,
                                handler_name=task_name,
                                iterator=iterator,
                                play_context=play_context,
                                notified_hosts=included_file._hosts[:],
                            )
                            if not result:
                                break
                except AnsibleParserError:
                    raise
                except AnsibleError as e:
                    for host in included_file._hosts:
                        iterator.mark_host_failed(host)
                        self._tqm._failed_hosts[host.name] = True
                    display.warning(to_text(e))
                    continue

        # remove hosts from notification list
        handler.notified_hosts = [
            h for h in handler.notified_hosts
            if h not in notified_hosts]
        display.debug("done running handlers, result is: %s" % result)
        return result

    def _filter_notified_failed_hosts(self, iterator, notified_hosts):
        return []

    def _filter_notified_hosts(self, notified_hosts):
        '''
        Filter notified hosts accordingly to strategy
        '''

        # As main strategy is linear, we do not filter hosts
        # We return a copy to avoid race conditions
        return notified_hosts[:]

    def _take_step(self, task, host=None):

        ret = False
        msg = u'Perform task: %s ' % task
        if host:
            msg += u'on %s ' % host
        msg += u'(N)o/(y)es/(c)ontinue: '
        resp = display.prompt(msg)

        if resp.lower() in ['y', 'yes']:
            display.debug("User ran task")
            ret = True
        elif resp.lower() in ['c', 'continue']:
            display.debug("User ran task and canceled step mode")
            self._step = False
            ret = True
        else:
            display.debug("User skipped task")

        display.banner(msg)

        return ret

    def _cond_not_supported_warn(self, task_name):
        display.warning("%s task does not support when conditional" % task_name)

    def _execute_meta(self, task, play_context, iterator, target_host):

        # meta tasks store their args in the _raw_params field of args,
        # since they do not use k=v pairs, so get that
        meta_action = task.args.get('_raw_params')

        def _evaluate_conditional(h):
            all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
                                                       _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
            templar = Templar(loader=self._loader, variables=all_vars)
            return task.evaluate_conditional(templar, all_vars)

        skipped = False
        msg = ''
        skip_reason = '%s conditional evaluated to False' % meta_action
        self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)

        # These don't support "when" conditionals
        if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
            self._cond_not_supported_warn(meta_action)

        if meta_action == 'noop':
            msg = "noop"
        elif meta_action == 'flush_handlers':
            self._flushed_hosts[target_host] = True
            self.run_handlers(iterator, play_context)
            self._flushed_hosts[target_host] = False
            msg = "ran handlers"
        elif meta_action == 'refresh_inventory':
            self._inventory.refresh_inventory()
            self._set_hosts_cache(iterator._play)
            msg = "inventory successfully refreshed"
        elif meta_action == 'clear_facts':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    hostname = host.get_name()
                    self._variable_manager.clear_facts(hostname)
                msg = "facts cleared"
            else:
                skipped = True
                skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
        elif meta_action == 'clear_host_errors':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    self._tqm._failed_hosts.pop(host.name, False)
                    self._tqm._unreachable_hosts.pop(host.name, False)
                    iterator.set_fail_state_for_host(host.name, FailedStates.NONE)
                msg = "cleared host errors"
            else:
                skipped = True
                skip_reason += ', not clearing host error state for %s' % target_host.name
        elif meta_action == 'end_batch':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    if host.name not in self._tqm._unreachable_hosts:
                        iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
                msg = "ending batch"
            else:
                skipped = True
                skip_reason += ', continuing current batch'
        elif meta_action == 'end_play':
            if _evaluate_conditional(target_host):
                for host in self._inventory.get_hosts(iterator._play.hosts):
                    if host.name not in self._tqm._unreachable_hosts:
                        iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
                # end_play is used in PlaybookExecutor/TQM to indicate that
                # the whole play is supposed to be ended as opposed to just a batch
                iterator.end_play = True
                msg = "ending play"
            else:
                skipped = True
                skip_reason += ', continuing play'
        elif meta_action == 'end_host':
            if _evaluate_conditional(target_host):
                iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
                iterator._play._removed_hosts.append(target_host.name)
                msg = "ending play for %s" % target_host.name
            else:
                skipped = True
                skip_reason += ", continuing execution for %s" % target_host.name
                # TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
                msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
        elif meta_action == 'role_complete':
            # Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
            # How would this work with allow_duplicates??
            if task.implicit:
                if target_host.name in task._role._had_task_run:
                    task._role._completed[target_host.name] = True
                    msg = 'role_complete for %s' % target_host.name
        elif meta_action == 'reset_connection':
            all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
                                                       _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
            templar = Templar(loader=self._loader, variables=all_vars)

            # apply the given task's information to the connection info,
            # which may override some fields already set by the play or
            # the options specified on the command line
            play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)

            # fields set from the play/task may be based on variables, so we have to
            # do the same kind of post validation step on it here before we use it.
            play_context.post_validate(templar=templar)

            # now that the play context is finalized, if the remote_addr is not set
            # default to using the host's address field as the remote address
            if not play_context.remote_addr:
                play_context.remote_addr = target_host.address

            # We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist.
            play_context.update_vars(all_vars)

            if target_host in self._active_connections:
                connection = Connection(self._active_connections[target_host])
                del self._active_connections[target_host]
            else:
                connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
                connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
                play_context.set_attributes_from_plugin(connection)

            if connection:
                try:
                    connection.reset()
                    msg = 'reset connection'
                except ConnectionError as e:
                    # most likely socket is already closed
                    display.debug("got an error while closing persistent connection: %s" % e)
            else:
                msg = 'no connection, nothing to reset'
        else:
            raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)

        result = {'msg': msg}
        if skipped:
            result['skipped'] = True
            result['skip_reason'] = skip_reason
        else:
            result['changed'] = False

        display.vv("META: %s" % msg)

        res = TaskResult(target_host, task, result)
        if skipped:
            self._tqm.send_callback('v2_runner_on_skipped', res)
        return [res]

    def get_hosts_left(self, iterator):
        ''' returns list of available hosts for this iterator by filtering out unreachables '''

        hosts_left = []
        for host in self._hosts_cache:
            if host not in self._tqm._unreachable_hosts:
                try:
                    hosts_left.append(self._inventory.hosts[host])
                except KeyError:
                    hosts_left.append(self._inventory.get_host(host))
        return hosts_left

    def update_active_connections(self, results):
        ''' updates the current active persistent connections '''
        for r in results:
            if 'args' in r._task_fields:
                socket_path = r._task_fields['args'].get('_ansible_socket')
                if socket_path:
                    if r._host not in self._active_connections:
                        self._active_connections[r._host] = socket_path


class NextAction(object):
    """ The next action after an interpreter's exit. """
    REDO = 1
    CONTINUE = 2
    EXIT = 3

    def __init__(self, result=EXIT):
        self.result = result


class Debugger(cmd.Cmd):
    prompt_continuous = '> '  # multiple lines

    def __init__(self, task, host, task_vars, play_context, result, next_action):
        # cmd.Cmd is old-style class
        cmd.Cmd.__init__(self)

        self.prompt = '[%s] %s (debug)> ' % (host, task)
        self.intro = None
        self.scope = {}
        self.scope['task'] = task
        self.scope['task_vars'] = task_vars
        self.scope['host'] = host
        self.scope['play_context'] = play_context
        self.scope['result'] = result
        self.next_action = next_action

    def cmdloop(self):
        try:
            cmd.Cmd.cmdloop(self)
        except KeyboardInterrupt:
            pass

    do_h = cmd.Cmd.do_help

    def do_EOF(self, args):
        """Quit"""
        return self.do_quit(args)

    def do_quit(self, args):
        """Quit"""
        display.display('User interrupted execution')
        self.next_action.result = NextAction.EXIT
        return True

    do_q = do_quit

    def do_continue(self, args):
        """Continue to next result"""
        self.next_action.result = NextAction.CONTINUE
        return True

    do_c = do_continue

    def do_redo(self, args):
        """Schedule task for re-execution. The re-execution may not be the next result"""
        self.next_action.result = NextAction.REDO
        return True

    do_r = do_redo

    def do_update_task(self, args):
        """Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
        templar = Templar(None, variables=self.scope['task_vars'])
        task = self.scope['task']
        task = task.load_data(task._ds)
        task.post_validate(templar)
        self.scope['task'] = task

    do_u = do_update_task

    def evaluate(self, args):
        try:
            return eval(args, globals(), self.scope)
        except Exception:
            t, v = sys.exc_info()[:2]
            if isinstance(t, str):
                exc_type_name = t
            else:
                exc_type_name = t.__name__
            display.display('***%s:%s' % (exc_type_name, repr(v)))
            raise

    def do_pprint(self, args):
        """Pretty Print"""
        try:
            result = self.evaluate(args)
            display.display(pprint.pformat(result))
        except Exception:
            pass

    do_p = do_pprint

    def execute(self, args):
        try:
            code = compile(args + '\n', '<stdin>', 'single')
            exec(code, globals(), self.scope)
        except Exception:
            t, v = sys.exc_info()[:2]
            if isinstance(t, str):
                exc_type_name = t
            else:
                exc_type_name = t.__name__
            display.display('***%s:%s' % (exc_type_name, repr(v)))
            raise

    def default(self, line):
        try:
            self.execute(line)
        except Exception:
            pass
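The `post_process_whens()` calls in the strategy code above are what re-apply `changed_when`/`failed_when` to `add_host` and `group_by` results, since those actions are finalized on the strategy side rather than inside a worker. The following is a minimal, self-contained sketch of the intended semantics only, not the actual engine code: the function name mirrors the real one, but the signature and the `evaluate` callback (standing in for `Conditional.evaluate_conditional()`) are illustrative assumptions.

```python
def post_process_whens_sketch(result, changed_when, failed_when, evaluate):
    # changed_when, when set, overrides whatever 'changed' value the
    # strategy-side action reported (e.g. add_host's idempotency verdict)
    if changed_when is not None:
        result['changed'] = evaluate(changed_when)

    # failed_when records its verdict under failed_when_result and marks
    # the task failed accordingly
    if failed_when is not None:
        result['failed_when_result'] = result['failed'] = evaluate(failed_when)

    return result


# forcing changed=true on an otherwise unchanged add_host-style result;
# the evaluator here is trivial because the condition is a plain boolean
assert post_process_whens_sketch({'changed': False}, True, None, bool) == {'changed': True}
```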
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY
The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
add_host module

##### ANSIBLE VERSION
```
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0
  config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg
  configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible
  ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible
  python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)]
```

##### CONFIGURATION
```
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False
```

##### OS / ENVIRONMENT
Local action, Mac OS

##### STEPS TO REPRODUCE
Run this playbook with >=1 host in the inventory

```yaml
---
- name: add hosts to inventory
  hosts: all
  gather_facts: false
  connection: local
  vars:
    num_hosts: 10
  tasks:
    - name: create inventory
      add_host:
        name: 'host-{{item}}'
        groups: dynamic
        ansible_connection: local
        host_id: '{{item}}'
      with_sequence: start=1 end={{num_hosts}} format=%d
      # changed_when: true
      notify:
        - single host handler

  handlers:
    - name: single host handler
      command: 'true'
```

##### EXPECTED RESULTS
I expect the tasks to be changed, and I expect the handler to be ran.

look at output from Ansible 2.9
```
$ ansible-playbook -i host1, dynamic_inventory.yml

PLAY [add hosts to inventory] ***************************************************************************

TASK [create inventory] *********************************************************************************
changed: [host1] => (item=1)
changed: [host1] => (item=2)
changed: [host1] => (item=3)
changed: [host1] => (item=4)
changed: [host1] => (item=5)
changed: [host1] => (item=6)
changed: [host1] => (item=7)
changed: [host1] => (item=8)
changed: [host1] => (item=9)
changed: [host1] => (item=10)

RUNNING HANDLER [single host handler] *******************************************************************
[WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
changed: [host1]

PLAY RECAP **********************************************************************************************
host1                      : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

##### ACTUAL RESULTS
with `# changed_when: true` left as a comment
```
$ ansible-playbook -i host1, dynamic_inventory.yml
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.

PLAY [add hosts to inventory] ***************************************************************************

TASK [create inventory] *********************************************************************************
ok: [host1] => (item=1)
ok: [host1] => (item=2)
ok: [host1] => (item=3)
ok: [host1] => (item=4)
ok: [host1] => (item=5)
ok: [host1] => (item=6)
ok: [host1] => (item=7)
ok: [host1] => (item=8)
ok: [host1] => (item=9)
ok: [host1] => (item=10)

PLAY RECAP **********************************************************************************************
host1                      : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
no handler is ran

When uncommenting `changed_when: true`
```
$ ansible-playbook -i host1, dynamic_inventory.yml -vvv
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible-playbook 2.11.0.dev0
  config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg
  configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible
  ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook
  python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)]
Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file
Parsed host1, inventory source with host_list plugin

PLAYBOOK: dynamic_inventory.yml *************************************************************************
1 plays in dynamic_inventory.yml

PLAY [add hosts to inventory] ***************************************************************************
META: ran handlers

TASK [create inventory] *********************************************************************************
task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9
creating host via 'add_host': hostname=host-1
changed: [host1] => (item=1) => {"add_host": {"groups": ["dynamic"], "host_name": "host-1", "host_vars": {"ansible_connection": "local", "host_id": "1"}}, "ansible_loop_var": "item", "changed": true, "item": "1"}
creating host via 'add_host': hostname=host-2
changed: [host1] => (item=2) => {"add_host": {"groups": ["dynamic"], "host_name": "host-2", "host_vars": {"ansible_connection": "local", "host_id": "2"}}, "ansible_loop_var": "item", "changed": true, "item": "2"}
creating host via 'add_host': hostname=host-3
changed: [host1] => (item=3) => {"add_host": {"groups": ["dynamic"], "host_name": "host-3", "host_vars": {"ansible_connection": "local", "host_id": "3"}}, "ansible_loop_var": "item", "changed": true, "item": "3"}
creating host via 'add_host': hostname=host-4
changed: [host1] => (item=4) => {"add_host": {"groups": ["dynamic"], "host_name": "host-4", "host_vars": {"ansible_connection": "local", "host_id": "4"}}, "ansible_loop_var": "item", "changed": true, "item": "4"}
creating host via 'add_host': hostname=host-5
changed: [host1] => (item=5) => {"add_host": {"groups": ["dynamic"], "host_name": "host-5", "host_vars": {"ansible_connection": "local", "host_id": "5"}}, "ansible_loop_var": "item", "changed": true, "item": "5"}
creating host via 'add_host': hostname=host-6
changed: [host1] => (item=6) => {"add_host": {"groups": ["dynamic"], "host_name": "host-6", "host_vars": {"ansible_connection": "local", "host_id": "6"}}, "ansible_loop_var": "item", "changed": true, "item": "6"}
creating host via 'add_host': hostname=host-7
changed: [host1] => (item=7) => {"add_host": {"groups": ["dynamic"], "host_name": "host-7", "host_vars": {"ansible_connection": "local", "host_id": "7"}}, "ansible_loop_var": "item", "changed": true, "item": "7"}
creating host via 'add_host': hostname=host-8
changed: [host1] => (item=8) => {"add_host": {"groups": ["dynamic"], "host_name": "host-8", "host_vars": {"ansible_connection": "local", "host_id": "8"}}, "ansible_loop_var": "item", "changed": true, "item": "8"}
creating host via 'add_host': hostname=host-9
changed: [host1] => (item=9) => {"add_host": {"groups": ["dynamic"], "host_name": "host-9", "host_vars": {"ansible_connection": "local", "host_id": "9"}}, "ansible_loop_var": "item", "changed": true, "item": "9"}
creating host via 'add_host': hostname=host-10
changed: [host1] => (item=10) => {"add_host": {"groups": ["dynamic"], "host_name": "host-10", "host_vars": {"ansible_connection": "local", "host_id": "10"}}, "ansible_loop_var": "item", "changed": true, "item": "10"}
NOTIFIED HANDLER single host handler for host1
ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment
the full traceback was:

Traceback (most recent call last):
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional
    for conditional in self.when:
TypeError: 'bool' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module>
    exit_code = cli.run()
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run
    results = pbex.run()
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run
    result = self._tqm.run(play=play)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run
    play_return = strategy.run(iterator, play_context)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run
    results += self._wait_on_pending_results(iterator)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results
    results = self._process_pending_results(iterator)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner
    results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results
    post_process_whens(result_item, original_task, handler_templar)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens
    result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
  File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional
    raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds)
UnboundLocalError: local variable 'conditional' referenced before assignment
```
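The traceback above shows `evaluate_conditional()` iterating `self.when` and failing when the playbook supplies a bare boolean (`changed_when: true`), after which the error handler references the loop variable before it was ever assigned. A minimal sketch of the defensive normalization such code needs; this is illustrative only and not the actual patch from the linked PR, and `evaluate_one` is a hypothetical callback that resolves a single conditional expression:

```python
def evaluate_when(when, evaluate_one):
    # YAML lets users write `changed_when: true` (a bool) as well as a
    # string or a list of strings, so coerce to a list before iterating
    if not isinstance(when, list):
        when = [when]

    for conditional in when:
        # booleans are already final values; only strings need evaluating
        if isinstance(conditional, bool):
            result = conditional
        else:
            result = evaluate_one(conditional)
        if not result:
            return False
    return True


# a bare boolean no longer raises TypeError: 'bool' object is not iterable
assert evaluate_when(True, lambda c: bool(c)) is True
assert evaluate_when(["x == 1"], lambda c: True) is True
```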
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
test/integration/targets/add_host/tasks/main.yml
# test code for the add_host action
# (c) 2015, Matt Davis <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

# See https://github.com/ansible/ansible/issues/36045
- set_fact:
    inventory_data:
      ansible_ssh_common_args: "-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
      # ansible_ssh_host: "127.0.0.3"
      ansible_host: "127.0.0.3"
      ansible_ssh_pass: "foobar"
      # ansible_ssh_port: "2222"
      ansible_port: "2222"
      ansible_ssh_private_key_file: "/tmp/inventory-cloudj9cGz5/identity"
      ansible_ssh_user: "root"
      hostname: "newdynamichost2"

- name: Show inventory_data for 36045
  debug:
    msg: "{{ inventory_data }}"

- name: Add host from dict 36045
  add_host: "{{ inventory_data }}"

- name: show newly added host
  debug:
    msg: "{{hostvars['newdynamichost2'].group_names}}"

- name: ensure that dynamically-added newdynamichost2 is visible via hostvars, groups 36045
  assert:
    that:
      - hostvars['newdynamichost2'] is defined
      - hostvars['newdynamichost2'].group_names is defined

# end of https://github.com/ansible/ansible/issues/36045 related tests

- name: add a host to the runtime inventory
  add_host:
    name: newdynamichost
    groups: newdynamicgroup
    a_var: from add_host

- debug: msg={{hostvars['newdynamichost'].group_names}}

- name: ensure that dynamically-added host is visible via hostvars, groups, etc (there are several caches that could break this)
  assert:
    that:
      - hostvars['bogushost'] is not defined  # there was a bug where an undefined host was a "type" instead of an instance- ensure this works before we rely on it
      - hostvars['newdynamichost'] is defined
      - hostvars['newdynamichost'].group_names is defined
      - "'newdynamicgroup' in hostvars['newdynamichost'].group_names"
      - hostvars['newdynamichost']['bogusvar'] is not defined
      - hostvars['newdynamichost']['a_var'] is defined
      - hostvars['newdynamichost']['a_var'] == 'from add_host'
      - groups['bogusgroup'] is not defined  # same check as above to ensure that bogus groups are undefined...
      - groups['newdynamicgroup'] is defined
      - "'newdynamichost' in groups['newdynamicgroup']"

# Tests for idempotency
- name: Add testhost01 dynamic host
  add_host:
    name: testhost01
  register: add_testhost01

- name: Try adding testhost01 again, with no changes
  add_host:
    name: testhost01
  register: add_testhost01_idem

- name: Add a host variable to testhost01
  add_host:
    name: testhost01
    foo: bar
  register: hostvar_testhost01

- name: Add the same host variable to testhost01, with no changes
  add_host:
    name: testhost01
    foo: bar
  register: hostvar_testhost01_idem

- name: Add another host, testhost02
  add_host:
    name: testhost02
  register: add_testhost02

- name: Add it again for good measure
  add_host:
    name: testhost02
  register: add_testhost02_idem

- name: Add testhost02 to a group
  add_host:
    name: testhost02
    groups:
      - testhostgroup
  register: add_group_testhost02

- name: Add testhost01 to the same group
  add_host:
    name: testhost01
    groups:
      - testhostgroup
  register: add_group_testhost01

- name: Add testhost02 to the group again
  add_host:
    name: testhost02
    groups:
      - testhostgroup
  register: add_group_testhost02_idem

- name: Add testhost01 to the group again
  add_host:
    name: testhost01
    groups:
      - testhostgroup
  register: add_group_testhost01_idem

- assert:
    that:
      - add_testhost01 is changed
      - add_testhost01_idem is not changed
      - hostvar_testhost01 is changed
      - hostvar_testhost01_idem is not changed
      - add_testhost02 is changed
      - add_testhost02_idem is not changed
      - add_group_testhost02 is changed
      - add_group_testhost01 is changed
      - add_group_testhost02_idem is not changed
      - add_group_testhost01_idem is not changed
      - groups['testhostgroup']|length == 2
      - "'testhost01' in groups['testhostgroup']"
      - "'testhost02' in groups['testhostgroup']"
      - hostvars['testhost01']['foo'] == 'bar'

- name: Give invalid input
  add_host: namenewdynamichost groupsnewdynamicgroup a_varfromadd_host
  ignore_errors: true
  register: badinput

- name: verify we detected bad input
  assert:
    that:
      - badinput is failed
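Given the regression tracked in the issue above, a natural companion to these idempotency checks is verifying that `changed_when` overrides are honored by `add_host`. A sketch of such tasks follows; these are hypothetical additions for illustration, not part of the committed test file (the `testhost03` name and registered variable names are invented):

```yaml
- name: re-add an existing host but force a changed result
  add_host:
    name: testhost01
  changed_when: true
  register: forced_changed

- name: add a brand new host but suppress the changed result
  add_host:
    name: testhost03
  changed_when: false
  register: forced_unchanged

- assert:
    that:
      - forced_changed is changed
      - forced_unchanged is not changed
```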
closed
ansible/ansible
https://github.com/ansible/ansible
71,627
add_host module no longer returns changed=true, errors with changed_when:
##### SUMMARY The behavior in Ansible 2.9 was that `add_host:` always came back as changed. Maybe if the host already existed it should return ok, but I expect that when a host is actually created it is changed. Sometime between 2.9 and 2.11 it started to always return ok. Trying to make it changed with `changed_when: true` produces a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME add_host module ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION ``` $ ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Local action, Mac OS ##### STEPS TO REPRODUCE Run this playbook with >=1 host in the inventory ```yaml --- - name: add hosts to inventory hosts: all gather_facts: false connection: local vars: num_hosts: 10 tasks: - name: create inventory add_host: name: 'host-{{item}}' groups: dynamic ansible_connection: local host_id: '{{item}}' with_sequence: start=1 end={{num_hosts}} format=%d # changed_when: true notify: - single host handler handlers: - name: single host handler command: 'true' ``` ##### EXPECTED RESULTS I expect the tasks to be changed, and I expect the handler to be ran. look at output from Ansible 2.9 ``` $ ansible-playbook -i host1, dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* changed: [host1] => (item=1) changed: [host1] => (item=2) changed: [host1] => (item=3) changed: [host1] => (item=4) changed: [host1] => (item=5) changed: [host1] => (item=6) changed: [host1] => (item=7) changed: [host1] => (item=8) changed: [host1] => (item=9) changed: [host1] => (item=10) RUNNING HANDLER [single host handler] *********************************************************************************************************************************************** [WARNING]: Platform darwin on host host1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. 
See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. changed: [host1] PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ##### ACTUAL RESULTS with `# changed_when: true` left as a comment ``` $ ansible-playbook -i host1, dynamic_inventory.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [add hosts to inventory] ******************************************************************************************************************************************************* TASK [create inventory] ************************************************************************************************************************************************************* ok: [host1] => (item=1) ok: [host1] => (item=2) ok: [host1] => (item=3) ok: [host1] => (item=4) ok: [host1] => (item=5) ok: [host1] => (item=6) ok: [host1] => (item=7) ok: [host1] => (item=8) ok: [host1] => (item=9) ok: [host1] => (item=10) PLAY RECAP ************************************************************************************************************************************************************************** host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` no handler is ran When uncommenting `changed_when: true` ``` $ ansible-playbook -i host1, dynamic_inventory.yml -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-playbook 2.11.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible-playbook python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] Using /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg as config file Parsed host1, inventory source with host_list plugin PLAYBOOK: dynamic_inventory.yml ***************************************************************************************************************************************************** 1 plays in dynamic_inventory.yml PLAY [add hosts to inventory] ******************************************************************************************************************************************************* META: ran handlers TASK [create inventory] ************************************************************************************************************************************************************* task path: /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/dynamic_inventory.yml:9 creating host via 'add_host': hostname=host-1 changed: [host1] => (item=1) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-1", "host_vars": { "ansible_connection": "local", "host_id": "1" } }, "ansible_loop_var": "item", "changed": true, "item": "1" } creating host via 'add_host': hostname=host-2 changed: [host1] => (item=2) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-2", "host_vars": { "ansible_connection": "local", "host_id": "2" } }, "ansible_loop_var": "item", "changed": true, "item": "2" } creating host via 'add_host': hostname=host-3 changed: [host1] => (item=3) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-3", "host_vars": { "ansible_connection": "local", "host_id": "3" } }, "ansible_loop_var": "item", "changed": true, "item": "3" } creating host via 'add_host': hostname=host-4 changed: [host1] => (item=4) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-4", "host_vars": { "ansible_connection": "local", "host_id": "4" } }, "ansible_loop_var": "item", "changed": true, "item": "4" } creating host via 'add_host': hostname=host-5 changed: [host1] => (item=5) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-5", "host_vars": { "ansible_connection": "local", "host_id": "5" } }, "ansible_loop_var": "item", "changed": true, "item": "5" } creating host via 'add_host': hostname=host-6 changed: [host1] => (item=6) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-6", "host_vars": { "ansible_connection": "local", "host_id": "6" } }, "ansible_loop_var": "item", "changed": true, "item": "6" } creating host via 'add_host': hostname=host-7 changed: [host1] => (item=7) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-7", "host_vars": { "ansible_connection": "local", "host_id": "7" } }, "ansible_loop_var": "item", "changed": true, "item": "7" } creating host via 'add_host': hostname=host-8 changed: [host1] => (item=8) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-8", "host_vars": { "ansible_connection": "local", "host_id": "8" } }, "ansible_loop_var": "item", "changed": 
true, "item": "8" } creating host via 'add_host': hostname=host-9 changed: [host1] => (item=9) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-9", "host_vars": { "ansible_connection": "local", "host_id": "9" } }, "ansible_loop_var": "item", "changed": true, "item": "9" } creating host via 'add_host': hostname=host-10 changed: [host1] => (item=10) => { "add_host": { "groups": [ "dynamic" ], "host_name": "host-10", "host_vars": { "ansible_connection": "local", "host_id": "10" } }, "ansible_loop_var": "item", "changed": true, "item": "10" } NOTIFIED HANDLER single host handler for host1 ERROR! Unexpected Exception, this is probably a bug: local variable 'conditional' referenced before assignment the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 93, in evaluate_conditional for conditional in self.when: TypeError: 'bool' object is not iterable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-playbook", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/playbook.py", line 128, in run results = pbex.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/playbook_executor.py", line 169, in run result = self._tqm.run(play=play) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/executor/task_queue_manager.py", line 292, in run play_return = strategy.run(iterator, play_context) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/linear.py", line 329, in run results += self._wait_on_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 804, in _wait_on_pending_results results = self._process_pending_results(iterator) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 129, in inner results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 661, in _process_pending_results post_process_whens(result_item, original_task, handler_templar) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/strategy/__init__.py", line 80, in post_process_whens result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/playbook/conditional.py", line 112, in evaluate_conditional raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds) UnboundLocalError: local variable 'conditional' referenced before assignment ```
https://github.com/ansible/ansible/issues/71627
https://github.com/ansible/ansible/pull/71719
2749d9fbf9242a59ed87f46ea057d84f4768a93e
394d216922d70709248a60f58da300f1e70f5894
2020-09-04T00:28:43Z
python
2022-02-04T11:35:23Z
test/integration/targets/changed_when/tasks/main.yml
# test code for the changed_when parameter
# (c) 2014, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

- name: ensure shell is always changed
  shell: ls -al /tmp
  register: shell_result

- debug: var=shell_result

- name: changed should always be true for shell
  assert:
    that:
      - "shell_result.changed"

- name: test changed_when override for shell
  shell: ls -al /tmp
  changed_when: False
  register: shell_result

- debug: var=shell_result

- name: changed should be false
  assert:
    that:
      - "not shell_result.changed"

- name: Add hosts to test group and ensure it appears as changed
  group_by:
    key: "cw_test1_{{ inventory_hostname }}"
  register: groupby

- name: verify its changed
  assert:
    that:
      - groupby is changed

- name: Add hosts to test group and ensure it does NOT appear as changed
  group_by:
    key: "cw_test2_{{ inventory_hostname }}"
  changed_when: False
  register: groupby

- name: verify its not changed
  assert:
    that:
      - groupby is not changed

- name: invalid conditional
  command: echo foo
  changed_when: boomboomboom
  register: invalid_conditional
  ignore_errors: true

- assert:
    that:
      - invalid_conditional is failed
      - invalid_conditional.stdout is defined
      - invalid_conditional.changed_when_result is contains('boomboomboom')
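The tests above all pivot on one behaviour: after the module runs, any `changed_when` expression is re-evaluated against the task vars (including the registered result) and its boolean value replaces the module's own `changed` flag, while an expression that fails to evaluate surfaces in `changed_when_result`. A simplified sketch of that flow, with `render` standing in for Ansible's real Jinja2 templating (not the actual Templar API):

```python
def apply_changed_when(result, changed_when, task_vars, render):
    # `changed_when` parses to a list of expressions; all must hold for
    # the task to report "changed", overriding the module's own flag.
    if changed_when:
        result["changed"] = all(bool(render(c, task_vars)) for c in changed_when)
    return result

result = {"rc": 0, "stdout": "...", "changed": True}
# `changed_when: False` overrides the shell module's always-changed result,
# as in the "test changed_when override for shell" case above.
print(apply_changed_when(result, [False], {}, lambda expr, v: expr))
# -> {'rc': 0, 'stdout': '...', 'changed': False}
```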
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is eventually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
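The linked fix lands in the `ssh` connection plugin's `host` option (adding the `delegated_vars['ansible_host']` sources). The behaviour being restored is roughly: when a task is delegated, the connection must resolve the remote address from the delegated host's vars rather than the inventory host's. An illustrative sketch — the lookup order here is a simplification, not the plugin's actual precedence rules:

```python
def resolve_remote_addr(task_vars):
    # When a task is delegated, the delegated host's address must win over
    # the inventory host's; this explicit ordering is an assumption made
    # for illustration.
    delegated = task_vars.get("delegated_vars", {})
    for candidate in (
        delegated.get("ansible_ssh_host"),
        delegated.get("ansible_host"),
        task_vars.get("ansible_ssh_host"),
        task_vars.get("ansible_host"),
        task_vars.get("inventory_hostname"),
    ):
        if candidate is not None:
            return candidate
    raise KeyError("no remote address could be resolved")

print(resolve_remote_addr({
    "inventory_hostname": "H2",
    "delegated_vars": {"ansible_host": "H1"},
}))  # -> H1, so `hostname` runs on the delegated host
```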
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
changelogs/fragments/ssh_use_right_host.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is eventually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
lib/ansible/executor/task_executor.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import pty import time import json import signal import subprocess import sys import termios import traceback from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip from ansible.executor.task_result import TaskResult from ansible.executor.module_common import get_action_args_with_defaults from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import binary_type from ansible.module_utils._text import to_text, to_native from ansible.module_utils.connection import write_to_file_descriptor from ansible.playbook.conditional import Conditional from ansible.playbook.task import Task from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var from ansible.vars.clean import namespace_facts, clean_facts from ansible.utils.display import Display from ansible.utils.vars import combine_vars, isidentifier display = Display() RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x] __all__ = ['TaskExecutor'] class TaskTimeoutError(BaseException): pass def task_timeout(signum, frame): raise TaskTimeoutError def remove_omit(task_args, omit_token): ''' Remove args with a value equal to the ``omit_token`` recursively to align with now having suboptions in the argument_spec ''' if not isinstance(task_args, dict): return task_args new_args = {} for i in task_args.items(): if i[1] == omit_token: continue elif isinstance(i[1], dict): new_args[i[0]] = remove_omit(i[1], omit_token) elif isinstance(i[1], list): new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]] else: new_args[i[0]] = i[1] return new_args class TaskExecutor: ''' This is the main worker class for the executor pipeline, which handles loading an action plugin to actually dispatch the task to a given host. This class roughly corresponds to the old Runner() class. ''' def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q): self._host = host self._task = task self._job_vars = job_vars self._play_context = play_context self._new_stdin = new_stdin self._loader = loader self._shared_loader_obj = shared_loader_obj self._connection = None self._final_q = final_q self._loop_eval_error = None self._task.squash() def run(self): ''' The main executor entrypoint, where we determine if the specified task requires looping and either runs the task with self._run_loop() or self._execute(). After that, the returned results are parsed and returned as a dict. 
''' display.debug("in run() - task %s" % self._task._uuid) try: try: items = self._get_loop_items() except AnsibleUndefinedVariable as e: # save the error raised here for use later items = None self._loop_eval_error = e if items is not None: if len(items) > 0: item_results = self._run_loop(items) # create the overall result item res = dict(results=item_results) # loop through the item results and set the global changed/failed/skipped result flags based on any item. res['skipped'] = True for item in item_results: if 'changed' in item and item['changed'] and not res.get('changed'): res['changed'] = True if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])): res['skipped'] = False if 'failed' in item and item['failed']: item_ignore = item.pop('_ansible_ignore_errors') if not res.get('failed'): res['failed'] = True res['msg'] = 'One or more items failed' self._task.ignore_errors = item_ignore elif self._task.ignore_errors and not item_ignore: self._task.ignore_errors = item_ignore # ensure to accumulate these for array in ['warnings', 'deprecations']: if array in item and item[array]: if array not in res: res[array] = [] if not isinstance(item[array], list): item[array] = [item[array]] res[array] = res[array] + item[array] del item[array] if not res.get('failed', False): res['msg'] = 'All items completed' if res['skipped']: res['msg'] = 'All items skipped' else: res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[]) else: display.debug("calling self._execute()") res = self._execute() display.debug("_execute() done") # make sure changed is set in the result, if it's not present if 'changed' not in res: res['changed'] = False def _clean_res(res, errors='surrogate_or_strict'): if isinstance(res, binary_type): return to_unsafe_text(res, errors=errors) elif isinstance(res, dict): for k in res: try: res[k] = _clean_res(res[k], errors=errors) except UnicodeError: if k == 'diff': # If this is a diff, substitute a replacement character if the value # is undecodable as utf8. (Fix #21804) display.warning("We were unable to decode all characters in the module return data." " Replaced some in an effort to return as much as possible") res[k] = _clean_res(res[k], errors='surrogate_then_replace') else: raise elif isinstance(res, list): for idx, item in enumerate(res): res[idx] = _clean_res(item, errors=errors) return res display.debug("dumping result to json") res = _clean_res(res) display.debug("done dumping result, returning") return res except AnsibleError as e: return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log) except Exception as e: return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log) finally: try: self._connection.close() except AttributeError: pass except Exception as e: display.debug(u"error closing connection: %s" % to_text(e)) def _get_loop_items(self): ''' Loads a lookup plugin to handle the with_* portion of a task (if specified), and returns the items result. 
''' # get search path for this task to pass to lookup plugins self._job_vars['ansible_search_path'] = self._task.get_search_path() # ensure basedir is always in (dwim already searches here but we need to display it) if self._loader.get_basedir() not in self._job_vars['ansible_search_path']: self._job_vars['ansible_search_path'].append(self._loader.get_basedir()) templar = Templar(loader=self._loader, variables=self._job_vars) items = None loop_cache = self._job_vars.get('_ansible_loop_cache') if loop_cache is not None: # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to` # to avoid reprocessing the loop items = loop_cache elif self._task.loop_with: if self._task.loop_with in self._shared_loader_obj.lookup_loader: fail = True if self._task.loop_with == 'first_found': # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing. fail = False loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail, convert_bare=False) if not fail: loop_terms = [t for t in loop_terms if not templar.is_template(t)] # get lookup mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar) # give lookup task 'context' for subdir (mostly needed for first_found) for subdir in ['template', 'var', 'file']: # TODO: move this to constants? if subdir in self._task.action: break setattr(mylookup, '_subdir', subdir + 's') # run lookup items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)) else: raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with) elif self._task.loop is not None: items = templar.template(self._task.loop) if not isinstance(items, list): raise AnsibleError( "Invalid data passed to 'loop', it requires a list, got this instead: %s." " Hint: If you passed a list/dict of just one element," " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items ) return items def _run_loop(self, items): ''' Runs the task with the loop items specified and collates the result into an array named 'results' which is inserted into the final result along with the item for which the loop ran. ''' results = [] # make copies of the job vars and task so we can add the item to # the variables and re-validate the task with the item variable # task_vars = self._job_vars.copy() task_vars = self._job_vars loop_var = 'item' index_var = None label = None loop_pause = 0 extended = False templar = Templar(loader=self._loader, variables=self._job_vars) # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate) if self._task.loop_control: loop_var = templar.template(self._task.loop_control.loop_var) index_var = templar.template(self._task.loop_control.index_var) loop_pause = templar.template(self._task.loop_control.pause) extended = templar.template(self._task.loop_control.extended) # This may be 'None',so it is templated below after we ensure a value and an item is assigned label = self._task.loop_control.label # ensure we always have a label if label is None: label = '{{' + loop_var + '}}' if loop_var in task_vars: display.warning(u"%s: The loop variable '%s' is already in use. " u"You should set the `loop_var` value in the `loop_control` option for the task" u" to something else to avoid variable collisions and unexpected behavior." 
% (self._task, loop_var)) ran_once = False no_log = False items_len = len(items) for item_index, item in enumerate(items): task_vars['ansible_loop_var'] = loop_var task_vars[loop_var] = item if index_var: task_vars['ansible_index_var'] = index_var task_vars[index_var] = item_index if extended: task_vars['ansible_loop'] = { 'allitems': items, 'index': item_index + 1, 'index0': item_index, 'first': item_index == 0, 'last': item_index + 1 == items_len, 'length': items_len, 'revindex': items_len - item_index, 'revindex0': items_len - item_index - 1, } try: task_vars['ansible_loop']['nextitem'] = items[item_index + 1] except IndexError: pass if item_index - 1 >= 0: task_vars['ansible_loop']['previtem'] = items[item_index - 1] # Update template vars to reflect current loop iteration templar.available_variables = task_vars # pause between loop iterations if loop_pause and ran_once: try: time.sleep(float(loop_pause)) except ValueError as e: raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e))) else: ran_once = True try: tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True) tmp_task._parent = self._task._parent tmp_play_context = self._play_context.copy() except AnsibleParserError as e: results.append(dict(failed=True, msg=to_text(e))) continue # now we swap the internal task and play context with their copies, # execute, and swap them back so we can do the next iteration cleanly (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) res = self._execute(variables=task_vars) task_fields = self._task.dump_attrs() (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) # update 'general no_log' based on specific no_log no_log = no_log or tmp_task.no_log # now update the result with the item info, and append the result # to the list of results res[loop_var] = item res['ansible_loop_var'] = loop_var if index_var: res[index_var] = item_index res['ansible_index_var'] = index_var if extended: res['ansible_loop'] = task_vars['ansible_loop'] res['_ansible_item_result'] = True res['_ansible_ignore_errors'] = task_fields.get('ignore_errors') # gets templated here unlike rest of loop_control fields, depends on loop_var above try: res['_ansible_item_label'] = templar.template(label, cache=False) except AnsibleUndefinedVariable as e: res.update({ 'failed': True, 'msg': 'Failed to template loop_control.label: %s' % to_text(e) }) tr = TaskResult( self._host.name, self._task._uuid, res, task_fields=task_fields, ) if tr.is_failed() or tr.is_unreachable(): self._final_q.send_callback('v2_runner_item_on_failed', tr) elif tr.is_skipped(): self._final_q.send_callback('v2_runner_item_on_skipped', tr) else: if getattr(self._task, 'diff', False): self._final_q.send_callback('v2_on_file_diff', tr) if self._task.action not in C._ACTION_INVENTORY_TASKS: self._final_q.send_callback('v2_runner_item_on_ok', tr) results.append(res) del task_vars[loop_var] # clear 'connection related' plugin variables for next iteration if self._connection: clear_plugins = { 'connection': self._connection._load_name, 'shell': self._connection._shell._load_name } if self._connection.become: clear_plugins['become'] = self._connection.become._load_name for plugin_type, plugin_name in clear_plugins.items(): for var in C.config.get_plugin_vars(plugin_type, plugin_name): if var in task_vars and var not in self._job_vars: del task_vars[var] self._task.no_log 
= no_log return results def _execute(self, variables=None): ''' The primary workhorse of the executor system, this runs the task on the specified host (which may be the delegated_to host) and handles the retry/until and block rescue/always execution ''' if variables is None: variables = self._job_vars templar = Templar(loader=self._loader, variables=variables) context_validation_error = None try: # TODO: remove play_context as this does not take delegation into account, task itself should hold values # for connection/shell/become/terminal plugin options to finalize. # Kept for now for backwards compatibility and a few functions that are still exclusive to it. # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. self._play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not self._play_context.remote_addr: self._play_context.remote_addr = self._host.address # We also add "magic" variables back into the variables dict to make sure # a certain subset of variables exist. self._play_context.update_vars(variables) except AnsibleError as e: # save the error, which we'll raise later if we don't end up # skipping this task during the conditional evaluation step context_validation_error = e # Evaluate the conditional (if any) for this task, which we do before running # the final task post-validation. We do this before the post validation due to # the fact that the conditional may specify that the task be skipped due to a # variable not being present which would otherwise cause validation to fail try: if not self._task.evaluate_conditional(templar, variables): display.debug("when evaluation is False, skipping this task") return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log) except AnsibleError as e: # loop error takes precedence if self._loop_eval_error is not None: # Display the error from the conditional as well to prevent # losing information useful for debugging. 
display.v(to_text(e)) raise self._loop_eval_error # pylint: disable=raising-bad-type raise # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task if self._loop_eval_error is not None: raise self._loop_eval_error # pylint: disable=raising-bad-type # if we ran into an error while setting up the PlayContext, raise it now, unless is known issue with delegation if context_validation_error is not None and not (self._task.delegate_to and isinstance(context_validation_error, AnsibleUndefinedVariable)): raise context_validation_error # pylint: disable=raising-bad-type # if this task is a TaskInclude, we just return now with a success code so the # main thread can expand the task list for the given host if self._task.action in C._ACTION_ALL_INCLUDE_TASKS: include_args = self._task.args.copy() include_file = include_args.pop('_raw_params', None) if not include_file: return dict(failed=True, msg="No include file was specified to the include") include_file = templar.template(include_file) return dict(include=include_file, include_args=include_args) # if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host elif self._task.action in C._ACTION_INCLUDE_ROLE: include_args = self._task.args.copy() return dict(include_args=include_args) # Now we do final validation on the task, which sets all fields to their final values. try: self._task.post_validate(templar=templar) except AnsibleError: raise except Exception: return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc())) if '_variable_params' in self._task.args: variable_params = self._task.args.pop('_variable_params') if isinstance(variable_params, dict): if C.INJECT_FACTS_AS_VARS: display.warning("Using a variable for a task's 'args' is unsafe in some situations " "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)") variable_params.update(self._task.args) self._task.args = variable_params if self._task.delegate_to: # use vars from delegated host (which already include task vars) instead of original host cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {}) orig_vars = templar.available_variables else: # just use normal host vars cvars = orig_vars = variables templar.available_variables = cvars # get the connection and the handler for this execution if (not self._connection or not getattr(self._connection, 'connected', False) or self._play_context.remote_addr != self._connection._play_context.remote_addr): self._connection = self._get_connection(cvars, templar) else: # if connection is reused, its _play_context is no longer valid and needs # to be replaced with the one templated above, in case other data changed self._connection._play_context = self._play_context plugin_vars = self._set_connection_options(cvars, templar) templar.available_variables = orig_vars # TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules # special handling for python interpreter for network_os, default to ansible python unless overriden if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars: # this also avoids 'python discovery' cvars['ansible_python_interpreter'] = sys.executable # get handler self._handler = self._get_action_handler(connection=self._connection, templar=templar) # Apply default params for action/module, if present self._task.args = 
get_action_args_with_defaults( self._task.resolved_action, self._task.args, self._task.module_defaults, templar, action_groups=self._task._parent._play._action_groups ) # And filter out any fields which were set to default(omit), and got the omit token value omit_token = variables.get('omit') if omit_token is not None: self._task.args = remove_omit(self._task.args, omit_token) # Read some values from the task, so that we can modify them if need be if self._task.until: retries = self._task.retries if retries is None: retries = 3 elif retries <= 0: retries = 1 else: retries += 1 else: retries = 1 delay = self._task.delay if delay < 0: delay = 1 # make a copy of the job vars here, in case we need to update them # with the registered variable value later on when testing conditions vars_copy = variables.copy() display.debug("starting attempt loop") result = None for attempt in range(1, retries + 1): display.debug("running the handler") try: if self._task.timeout: old_sig = signal.signal(signal.SIGALRM, task_timeout) signal.alarm(self._task.timeout) result = self._handler.run(task_vars=variables) except (AnsibleActionFail, AnsibleActionSkip) as e: return e.result except AnsibleConnectionFailure as e: return dict(unreachable=True, msg=to_text(e)) except TaskTimeoutError as e: msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout) return dict(failed=True, msg=msg) finally: if self._task.timeout: signal.alarm(0) old_sig = signal.signal(signal.SIGALRM, old_sig) self._handler.cleanup() display.debug("handler run complete") # preserve no log result["_ansible_no_log"] = self._play_context.no_log if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution if self._task.register: if not isidentifier(self._task.register): raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register) vars_copy[self._task.register] = result if self._task.async_val > 0: if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'): result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy) if result.get('failed'): self._final_q.send_callback( 'v2_runner_on_async_failed', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) else: self._final_q.send_callback( 'v2_runner_on_async_ok', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) # ensure no log is preserved result["_ansible_no_log"] = self._play_context.no_log # helper methods for use below in evaluating changed/failed_when def _evaluate_changed_when_result(result): if self._task.changed_when is not None and self._task.changed_when: cond = Conditional(loader=self._loader) cond.when = self._task.changed_when result['changed'] = cond.evaluate_conditional(templar, vars_copy) def _evaluate_failed_when_result(result): if self._task.failed_when: cond = Conditional(loader=self._loader) cond.when = self._task.failed_when failed_when_result = cond.evaluate_conditional(templar, vars_copy) result['failed_when_result'] = result['failed'] = failed_when_result else: failed_when_result = False return failed_when_result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and 
self._task.delegate_facts: if '_ansible_delegated_vars' in vars_copy: vars_copy['_ansible_delegated_vars'].update(result['ansible_facts']) else: vars_copy['_ansible_delegated_vars'] = result['ansible_facts'] else: vars_copy.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: vars_copy.update(clean_facts(af)) # set the failed property if it was missing. if 'failed' not in result: # rc is here for backwards compatibility and modules that use it instead of 'failed' if 'rc' in result and result['rc'] not in [0, "0"]: result['failed'] = True else: result['failed'] = False # Make attempts and retries available early to allow their use in changed/failed_when if self._task.until: result['attempts'] = attempt # set the changed property if it was missing. if 'changed' not in result: result['changed'] = False if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # re-update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution # This gives changed/failed_when access to additional recently modified # attributes of result if self._task.register: vars_copy[self._task.register] = result # if we didn't skip this task, use the helpers to evaluate the changed/ # failed_when properties if 'skipped' not in result: try: condname = 'changed' _evaluate_changed_when_result(result) condname = 'failed' _evaluate_failed_when_result(result) except AnsibleError as e: result['failed'] = True result['%s_when_result' % condname] = to_text(e) if retries > 1: cond = Conditional(loader=self._loader) cond.when = self._task.until if cond.evaluate_conditional(templar, vars_copy): break else: # no conditional check, or it failed, so sleep for the specified time if attempt < retries: result['_ansible_retry'] = True result['retries'] = retries display.debug('Retrying task, attempt %d of %d' % (attempt, retries)) self._final_q.send_callback( 'v2_runner_retry', TaskResult( self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs() ) ) time.sleep(delay) self._handler = self._get_action_handler(connection=self._connection, templar=templar) else: if retries > 1: # we ran out of attempts, so mark the result as failed result['attempts'] = retries - 1 result['failed'] = True if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # do the final update of the local variables here, for both registered # values and any facts which may have been created if self._task.register: variables[self._task.register] = result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and self._task.delegate_facts: if '_ansible_delegated_vars' in variables: variables['_ansible_delegated_vars'].update(result['ansible_facts']) else: variables['_ansible_delegated_vars'] = result['ansible_facts'] else: variables.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: variables.update(clean_facts(af)) # save the notification target in the result, if it 
was specified, as # this task may be running in a loop in which case the notification # may be item-specific, ie. "notify: service {{item}}" if self._task.notify is not None: result['_ansible_notify'] = self._task.notify # add the delegated vars to the result, so we can reference them # on the results side without having to do any further templating # also now add conneciton vars results when delegating if self._task.delegate_to: result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to} for k in plugin_vars: result["_ansible_delegated_vars"][k] = cvars.get(k) # note: here for callbacks that rely on this info to display delegation for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'): if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars: result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed) # and return display.debug("attempt loop complete, returning result") return result def _poll_async_result(self, result, templar, task_vars=None): ''' Polls for the specified JID to be complete ''' if task_vars is None: task_vars = self._job_vars async_jid = result.get('ansible_job_id') if async_jid is None: return dict(failed=True, msg="No job id was returned by the async task") # Create a new pseudo-task to run the async_status module, and run # that (with a sleep for "poll" seconds between each retry) until the # async time limit is exceeded. async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment)) # FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized # Because this is an async task, the action handler is async. However, # we need the 'normal' action handler for the status check, so get it # now via the action_loader async_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=async_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) time_left = self._task.async_val while time_left > 0: time.sleep(self._task.poll) try: async_result = async_handler.run(task_vars=task_vars) # We do not bail out of the loop in cases where the failure # is associated with a parsing error. The async_runner can # have issues which result in a half-written/unparseable result # file on disk, which manifests to the user as a timeout happening # before it's time to timeout. if (int(async_result.get('finished', 0)) == 1 or ('failed' in async_result and async_result.get('_ansible_parsed', False)) or 'skipped' in async_result): break except Exception as e: # Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal. # On an exception, call the connection's reset method if it has one # (eg, drop/recreate WinRM connection; some reused connections are in a broken state) display.vvvv("Exception during async poll, retrying... 
(%s)" % to_text(e)) display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc())) try: async_handler._connection.reset() except AttributeError: pass # Little hack to raise the exception if we've exhausted the timeout period time_left -= self._task.poll if time_left <= 0: raise else: time_left -= self._task.poll self._final_q.send_callback( 'v2_runner_on_async_poll', TaskResult( self._host.name, async_task._uuid, async_result, task_fields=async_task.dump_attrs(), ), ) if int(async_result.get('finished', 0)) != 1: if async_result.get('_ansible_parsed'): return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result) else: return dict(failed=True, msg="async task produced unparseable results", async_result=async_result) else: # If the async task finished, automatically cleanup the temporary # status file left behind. cleanup_task = Task.load( { 'async_status': { 'jid': async_jid, 'mode': 'cleanup', }, 'environment': self._task.environment, } ) cleanup_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=cleanup_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) cleanup_handler.run(task_vars=task_vars) cleanup_handler.cleanup(force=True) async_handler.cleanup(force=True) return async_result def _get_become(self, name): become = become_loader.get(name) if not become: raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. " "Use `ansible-doc -t become -l` to list available plugins." % name) return become def _get_connection(self, cvars, templar): ''' Reads the connection property for the host, and returns the correct connection object from the list of connection plugins ''' # use magic var if it exists, if not, let task inheritance do it's thing. if cvars.get('ansible_connection') is not None: self._play_context.connection = templar.template(cvars['ansible_connection']) else: self._play_context.connection = self._task.connection # TODO: play context has logic to update the connection for 'smart' # (default value, will chose between ssh and paramiko) and 'persistent' # (really paramiko), eventually this should move to task object itself. connection_name = self._play_context.connection # load connection conn_type = connection_name connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context( conn_type, self._play_context, self._new_stdin, task_uuid=self._task._uuid, ansible_playbook_pid=to_text(os.getppid()) ) if not connection: raise AnsibleError("the connection plugin '%s' was not found" % conn_type) # load become plugin if needed if cvars.get('ansible_become') is not None: become = boolean(templar.template(cvars['ansible_become'])) else: become = self._task.become if become: if cvars.get('ansible_become_method'): become_plugin = self._get_become(templar.template(cvars['ansible_become_method'])) else: become_plugin = self._get_become(self._task.become_method) try: connection.set_become_plugin(become_plugin) except AttributeError: # Older connection plugin that does not support set_become_plugin pass if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False): raise AnsibleError( "The '%s' connection does not provide a TTY which is required for the selected " "become plugin: %s." 
% (conn_type, become_plugin.name) ) # Backwards compat for connection plugins that don't support become plugins # Just do this unconditionally for now, we could move it inside of the # AttributeError above later self._play_context.set_become_plugin(become_plugin.name) # Also backwards compat call for those still using play_context self._play_context.set_attributes_from_plugin(connection) if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)): self._play_context.timeout = connection.get_option('persistent_command_timeout') display.vvvv('attempting to start connection', host=self._play_context.remote_addr) display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr) options = self._get_persistent_connection_options(connection, cvars, templar) socket_path = start_connection(self._play_context, options, self._task._uuid) display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr) setattr(connection, '_socket_path', socket_path) return connection def _get_persistent_connection_options(self, connection, final_vars, templar): option_vars = C.config.get_plugin_vars('connection', connection._load_name) plugin = connection._sub_plugin if plugin.get('type'): option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name'])) options = {} for k in option_vars: if k in final_vars: options[k] = templar.template(final_vars[k]) return options def _set_plugin_options(self, plugin_type, variables, templar, task_keys): try: plugin = getattr(self._connection, '_%s' % plugin_type) except AttributeError: # Some plugins are assigned to private attrs, ``become`` is not plugin = getattr(self._connection, plugin_type) option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name) options = {} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # TODO move to task method? plugin.set_options(task_keys=task_keys, var_options=options) return option_vars def _set_connection_options(self, variables, templar): # keep list of variable names possibly consumed varnames = [] # grab list of usable vars for this plugin option_vars = C.config.get_plugin_vars('connection', self._connection._load_name) varnames.extend(option_vars) # create dict of 'templated vars' options = {'_extras': {}} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # add extras if plugin supports them if getattr(self._connection, 'allow_extras', False): for k in variables: if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options: options['_extras'][k] = templar.template(variables[k]) task_keys = self._task.dump_attrs() # The task_keys 'timeout' attr is the task's timeout, not the connection timeout. # The connection timeout is threaded through the play_context for now. task_keys['timeout'] = self._play_context.timeout if self._play_context.password: # The connection password is threaded through the play_context for # now. This is something we ultimately want to avoid, but the first # step is to get connection plugins pulling the password through the # config system instead of directly accessing play_context. 
task_keys['password'] = self._play_context.password # Prevent task retries from overriding connection retries del(task_keys['retries']) # set options with 'templated vars' specific to this plugin and dependent ones self._connection.set_options(task_keys=task_keys, var_options=options) varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys)) if self._connection.become is not None: if self._play_context.become_pass: # FIXME: eventually remove from task and play_context, here for backwards compat # keep out of play objects to avoid accidental disclosure, only become plugin should have # The become pass is already in the play_context if given on # the CLI (-K). Make the plugin aware of it in this case. task_keys['become_pass'] = self._play_context.become_pass varnames.extend(self._set_plugin_options('become', variables, templar, task_keys)) # FOR BACKWARDS COMPAT: for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'): try: setattr(self._play_context, option, self._connection.become.get_option(option)) except KeyError: pass # some plugins don't support all base flags self._play_context.prompt = self._connection.become.prompt return varnames def _get_action_handler(self, connection, templar): ''' Returns the correct action plugin to handle the requestion task action ''' module_collection, separator, module_name = self._task.action.rpartition(".") module_prefix = module_name.split('_')[0] if module_collection: # For network modules, which look for one action plugin per platform, look for the # action plugin in the same collection as the module by prefixing the action plugin # with the same collection. network_action = "{0}.{1}".format(module_collection, module_prefix) else: network_action = module_prefix collections = self._task.collections # let action plugin override module, fallback to 'normal' action plugin otherwise if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections): handler_name = self._task.action elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))): handler_name = network_action display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name, action=self._task.action), host=self._play_context.remote_addr) else: # use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search handler_name = 'ansible.legacy.normal' collections = None # until then, we don't want the task's collection list to be consulted; use the builtin handler = self._shared_loader_obj.action_loader.get( handler_name, task=self._task, connection=connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, collection_list=collections ) if not handler: raise AnsibleError("the handler '%s' was not found" % handler_name) return handler def start_connection(play_context, variables, task_uuid): ''' Starts the persistent connection ''' candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])] candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep)) for dirname in candidate_paths: ansible_connection = os.path.join(dirname, 'ansible-connection') if os.path.isfile(ansible_connection): display.vvvv("Found ansible-connection at path {0}".format(ansible_connection)) break else: raise AnsibleError("Unable to find location of 'ansible-connection'. 
" "Please set or check the value of ANSIBLE_CONNECTION_PATH") env = os.environ.copy() env.update({ # HACK; most of these paths may change during the controller's lifetime # (eg, due to late dynamic role includes, multi-playbook execution), without a way # to invalidate/update, ansible-connection won't always see the same plugins the controller # can. 'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(), 'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(), 'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)), 'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(), 'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(), 'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(), 'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(), }) python = sys.executable master, slave = pty.openpty() p = subprocess.Popen( [python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) os.close(slave) # We need to set the pty into noncanonical mode. This ensures that we # can receive lines longer than 4095 characters (plus newline) without # truncating. old = termios.tcgetattr(master) new = termios.tcgetattr(master) new[3] = new[3] & ~termios.ICANON try: termios.tcsetattr(master, termios.TCSANOW, new) write_to_file_descriptor(master, variables) write_to_file_descriptor(master, play_context.serialize()) (stdout, stderr) = p.communicate() finally: termios.tcsetattr(master, termios.TCSANOW, old) os.close(master) if p.returncode == 0: result = json.loads(to_text(stdout, errors='surrogate_then_replace')) else: try: result = json.loads(to_text(stderr, errors='surrogate_then_replace')) except getattr(json.decoder, 'JSONDecodeError', ValueError): # JSONDecodeError only available on Python 3.5+ result = {'error': to_text(stderr, errors='surrogate_then_replace')} if 'messages' in result: for level, message in result['messages']: if level == 'log': display.display(message, log_only=True) elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'): getattr(display, level)(message, host=play_context.remote_addr) else: if hasattr(display, level): getattr(display, level)(message) else: display.vvvv(message, host=play_context.remote_addr) if 'error' in result: if play_context.verbosity > 2: if result.get('exception'): msg = "The full traceback is:\n" + result['exception'] display.display(msg, color=C.COLOR_ERROR) raise AnsibleError(result['error']) return result['socket_path']
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is eventually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
lib/ansible/plugins/connection/ssh.py
# Copyright (c) 2012, Michael DeHaan <[email protected]> # Copyright 2015 Abhijit Menon-Sen <[email protected]> # Copyright 2017 Toshio Kuratomi <[email protected]> # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = ''' name: ssh short_description: connect via SSH client binary description: - This connection plugin allows Ansible to communicate to the target machines through normal SSH command line. - Ansible does not expose a channel to allow communication between the user and the SSH process to accept a password manually to decrypt an SSH key when using this connection plugin (which is the default). The use of C(ssh-agent) is highly recommended. author: ansible (@core) extends_documentation_fragment: - connection_pipelining version_added: historical notes: - Many options default to C(None) here but that only means we do not override the SSH tool's defaults and/or configuration. For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config). options: host: description: Hostname/IP to connect to. vars: - name: inventory_hostname - name: ansible_host - name: ansible_ssh_host - name: delegated_vars['ansible_host'] - name: delegated_vars['ansible_ssh_host'] host_key_checking: description: Determines if SSH should check host keys. default: True type: boolean ini: - section: defaults key: 'host_key_checking' - section: ssh_connection key: 'host_key_checking' version_added: '2.5' env: - name: ANSIBLE_HOST_KEY_CHECKING - name: ANSIBLE_SSH_HOST_KEY_CHECKING version_added: '2.5' vars: - name: ansible_host_key_checking version_added: '2.5' - name: ansible_ssh_host_key_checking version_added: '2.5' password: description: Authentication password for the C(remote_user). Can be supplied as CLI option. vars: - name: ansible_password - name: ansible_ssh_pass - name: ansible_ssh_password sshpass_prompt: description: - Password prompt that sshpass should search for. Supported by sshpass 1.06 and up. - Defaults to C(Enter PIN for) when pkcs11_provider is set. default: '' ini: - section: 'ssh_connection' key: 'sshpass_prompt' env: - name: ANSIBLE_SSHPASS_PROMPT vars: - name: ansible_sshpass_prompt version_added: '2.10' ssh_args: description: Arguments to pass to all SSH CLI tools. default: '-C -o ControlMaster=auto -o ControlPersist=60s' ini: - section: 'ssh_connection' key: 'ssh_args' env: - name: ANSIBLE_SSH_ARGS vars: - name: ansible_ssh_args version_added: '2.7' ssh_common_args: description: Common extra args for all SSH CLI tools. ini: - section: 'ssh_connection' key: 'ssh_common_args' version_added: '2.7' env: - name: ANSIBLE_SSH_COMMON_ARGS version_added: '2.7' vars: - name: ansible_ssh_common_args cli: - name: ssh_common_args default: '' ssh_executable: default: ssh description: - This defines the location of the SSH binary. It defaults to C(ssh) which will use the first SSH binary available in $PATH. - This option is usually not required, it might be useful when access to system SSH is restricted, or when using SSH wrappers to connect to remote hosts. env: [{name: ANSIBLE_SSH_EXECUTABLE}] ini: - {key: ssh_executable, section: ssh_connection} #const: ANSIBLE_SSH_EXECUTABLE version_added: "2.2" vars: - name: ansible_ssh_executable version_added: '2.7' sftp_executable: default: sftp description: - This defines the location of the sftp binary. 
It defaults to C(sftp) which will use the first binary available in $PATH. env: [{name: ANSIBLE_SFTP_EXECUTABLE}] ini: - {key: sftp_executable, section: ssh_connection} version_added: "2.6" vars: - name: ansible_sftp_executable version_added: '2.7' scp_executable: default: scp description: - This defines the location of the scp binary. It defaults to C(scp) which will use the first binary available in $PATH. env: [{name: ANSIBLE_SCP_EXECUTABLE}] ini: - {key: scp_executable, section: ssh_connection} version_added: "2.6" vars: - name: ansible_scp_executable version_added: '2.7' scp_extra_args: description: Extra exclusive to the C(scp) CLI vars: - name: ansible_scp_extra_args env: - name: ANSIBLE_SCP_EXTRA_ARGS version_added: '2.7' ini: - key: scp_extra_args section: ssh_connection version_added: '2.7' cli: - name: scp_extra_args default: '' sftp_extra_args: description: Extra exclusive to the C(sftp) CLI vars: - name: ansible_sftp_extra_args env: - name: ANSIBLE_SFTP_EXTRA_ARGS version_added: '2.7' ini: - key: sftp_extra_args section: ssh_connection version_added: '2.7' cli: - name: sftp_extra_args default: '' ssh_extra_args: description: Extra exclusive to the SSH CLI. vars: - name: ansible_ssh_extra_args env: - name: ANSIBLE_SSH_EXTRA_ARGS version_added: '2.7' ini: - key: ssh_extra_args section: ssh_connection version_added: '2.7' cli: - name: ssh_extra_args default: '' reconnection_retries: description: Number of attempts to connect. default: 0 type: integer env: - name: ANSIBLE_SSH_RETRIES ini: - section: connection key: retries - section: ssh_connection key: retries vars: - name: ansible_ssh_retries version_added: '2.7' port: description: Remote port to connect to. type: int ini: - section: defaults key: remote_port env: - name: ANSIBLE_REMOTE_PORT vars: - name: ansible_port - name: ansible_ssh_port keyword: - name: port remote_user: description: - User name with which to login to the remote server, normally set by the remote_user keyword. - If no user is supplied, Ansible will let the SSH client binary choose the user as it normally. ini: - section: defaults key: remote_user env: - name: ANSIBLE_REMOTE_USER vars: - name: ansible_user - name: ansible_ssh_user cli: - name: user keyword: - name: remote_user pipelining: env: - name: ANSIBLE_PIPELINING - name: ANSIBLE_SSH_PIPELINING ini: - section: defaults key: pipelining - section: connection key: pipelining - section: ssh_connection key: pipelining vars: - name: ansible_pipelining - name: ansible_ssh_pipelining private_key_file: description: - Path to private key file to use for authentication. ini: - section: defaults key: private_key_file env: - name: ANSIBLE_PRIVATE_KEY_FILE vars: - name: ansible_private_key_file - name: ansible_ssh_private_key_file cli: - name: private_key_file option: '--private-key' control_path: description: - This is the location to save SSH's ControlPath sockets, it uses SSH's variable substitution. - Since 2.3, if null (default), ansible will generate a unique hash. Use ``%(directory)s`` to indicate where to use the control dir path setting. - Before 2.3 it defaulted to ``control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r``. - Be aware that this setting is ignored if C(-o ControlPath) is set in ssh args. env: - name: ANSIBLE_SSH_CONTROL_PATH ini: - key: control_path section: ssh_connection vars: - name: ansible_control_path version_added: '2.7' control_path_dir: default: ~/.ansible/cp description: - This sets the directory to use for ssh control path if the control path setting is null. 
- Also, provides the ``%(directory)s`` variable for the control path setting. env: - name: ANSIBLE_SSH_CONTROL_PATH_DIR ini: - section: ssh_connection key: control_path_dir vars: - name: ansible_control_path_dir version_added: '2.7' sftp_batch_mode: default: 'yes' description: 'TODO: write it' env: [{name: ANSIBLE_SFTP_BATCH_MODE}] ini: - {key: sftp_batch_mode, section: ssh_connection} type: bool vars: - name: ansible_sftp_batch_mode version_added: '2.7' ssh_transfer_method: description: - "Preferred method to use when transferring files over ssh" - Setting to 'smart' (default) will try them in order, until one succeeds or they all fail - Using 'piped' creates an ssh pipe with C(dd) on either side to copy the data choices: ['sftp', 'scp', 'piped', 'smart'] env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}] ini: - {key: transfer_method, section: ssh_connection} vars: - name: ansible_ssh_transfer_method version_added: '2.12' scp_if_ssh: deprecated: why: In favor of the "ssh_transfer_method" option. version: "2.17" alternatives: ssh_transfer_method default: smart description: - "Preferred method to use when transferring files over SSH." - When set to I(smart), Ansible will try them until one succeeds or they all fail. - If set to I(True), it will force 'scp', if I(False) it will use 'sftp'. - This setting will be overridden by ssh_transfer_method if set. env: [{name: ANSIBLE_SCP_IF_SSH}] ini: - {key: scp_if_ssh, section: ssh_connection} vars: - name: ansible_scp_if_ssh version_added: '2.7' use_tty: version_added: '2.5' default: 'yes' description: add -tt to ssh commands to force tty allocation. env: [{name: ANSIBLE_SSH_USETTY}] ini: - {key: usetty, section: ssh_connection} type: bool vars: - name: ansible_ssh_use_tty version_added: '2.7' timeout: default: 10 description: - This is the default amount of time we will wait while establishing an SSH connection. - It also controls how long we can wait when reading from the connection once it is established (select on the socket). env: - name: ANSIBLE_TIMEOUT - name: ANSIBLE_SSH_TIMEOUT version_added: '2.11' ini: - key: timeout section: defaults - key: timeout section: ssh_connection version_added: '2.11' vars: - name: ansible_ssh_timeout version_added: '2.11' cli: - name: timeout type: integer pkcs11_provider: version_added: '2.12' default: "" description: - "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so" - Requires sshpass version 1.06+, sshpass must support the -P option.
env: [{name: ANSIBLE_PKCS11_PROVIDER}] ini: - {key: pkcs11_provider, section: ssh_connection} vars: - name: ansible_ssh_pkcs11_provider ''' import errno import fcntl import hashlib import os import pty import re import shlex import subprocess import time from functools import wraps from ansible.errors import ( AnsibleAuthenticationFailure, AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound, ) from ansible.errors import AnsibleOptionsError from ansible.module_utils.compat import selectors from ansible.module_utils.six import PY3, text_type, binary_type from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean from ansible.plugins.connection import ConnectionBase, BUFSIZE from ansible.plugins.shell.powershell import _parse_clixml from ansible.utils.display import Display from ansible.utils.path import unfrackpath, makedirs_safe display = Display() b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception # while invoking a script via -m b'PHP Parse error:', # PHP always returns error 255 ) SSHPASS_AVAILABLE = None SSH_DEBUG = re.compile(r'^debug\d+: .*') class AnsibleControlPersistBrokenPipeError(AnsibleError): ''' ControlPersist broken pipe ''' pass def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display): # sshpass errors if command == b'sshpass': # Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account. if return_tuple[0] == 5: msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries) if remaining_retries <= 0: msg = 'Invalid/incorrect password:' if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip()) raise AnsibleAuthenticationFailure(msg) # sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios. # No exception is raised, so the connection is retried - except when attempting to use # sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly. elif return_tuple[0] in [1, 2, 3, 4, 6]: msg = 'sshpass error:' if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: details = to_native(return_tuple[2]).rstrip() if "sshpass: invalid option -- 'P'" in details: details = 'Installed sshpass version does not support customized password prompts. ' \ 'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details)) msg = '{0} {1}'.format(msg, details) if return_tuple[0] == 255: SSH_ERROR = True for signature in b_NOT_SSH_ERRORS: if signature in return_tuple[1]: SSH_ERROR = False break if SSH_ERROR: msg = "Failed to connect to the host via ssh:" if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip()) raise AnsibleConnectionFailure(msg) # For other errors, no exception is raised so the connection is retried and we only log the messages if 1 <= return_tuple[0] <= 254: msg = u"Failed to connect to the host via ssh:" if no_log: msg = u'{0} <error censored due to no log>'.format(msg) else: msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip()) display.vvv(msg, host=host) def _ssh_retry(func): """ Decorator to retry ssh/scp/sftp in the case of a connection failure Will retry if: * an exception is caught * ssh returns 255 Will not retry if * sshpass returns 5 (invalid password, to prevent account lockouts) * remaining_tries is < 2 * retries limit reached """ @wraps(func) def wrapped(self, *args, **kwargs): remaining_tries = int(self.get_option('reconnection_retries')) + 1 cmd_summary = u"%s..." % to_text(args[0]) conn_password = self.get_option('password') or self._play_context.password for attempt in range(remaining_tries): cmd = args[0] if attempt != 0 and conn_password and isinstance(cmd, list): # If this is a retry, the fd/pipe for sshpass is closed, and we need a new one self.sshpass_pipe = os.pipe() cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict') try: try: return_tuple = func(self, *args, **kwargs) # TODO: this should come from task if self._play_context.no_log: display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host) else: display.vvv(return_tuple, host=self.host) # 0 = success # 1-254 = remote command return code # 255 could be a failure from the ssh command itself except (AnsibleControlPersistBrokenPipeError): # Retry one more time because of the ControlPersist broken pipe (see #16731) cmd = args[0] if conn_password and isinstance(cmd, list): # This is a retry, so the fd/pipe for sshpass is closed, and we need a new one self.sshpass_pipe = os.pipe() cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict') display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE") return_tuple = func(self, *args, **kwargs) remaining_retries = remaining_tries - attempt - 1 _handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host) break # 5 = Invalid/incorrect password from sshpass except AnsibleAuthenticationFailure: # Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries raise except (AnsibleConnectionFailure, Exception) as e: if attempt == remaining_tries - 1: raise else: pause = 2 ** attempt - 1 if pause > 30: pause = 30 if isinstance(e, AnsibleConnectionFailure): msg = u"ssh_retry: attempt: %d, ssh return code is 255. 
cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause) else: msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), " u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause)) display.vv(msg, host=self.host) time.sleep(pause) continue return return_tuple return wrapped class Connection(ConnectionBase): ''' ssh based connections ''' transport = 'ssh' has_pipelining = True def __init__(self, *args, **kwargs): super(Connection, self).__init__(*args, **kwargs) # TODO: all should come from get_option(), but might not be set at this point yet self.host = self._play_context.remote_addr self.port = self._play_context.port self.user = self._play_context.remote_user self.control_path = None self.control_path_dir = None # Windows operates differently from a POSIX connection/shell plugin, # we need to set various properties to ensure SSH on Windows continues # to work if getattr(self._shell, "_IS_WINDOWS", False): self.has_native_async = True self.always_pipeline_modules = True self.module_implementation_preferences = ('.ps1', '.exe', '') self.allow_executable = False # The connection is created by running ssh/scp/sftp from the exec_command, # put_file, and fetch_file methods, so we don't need to do any connection # management here. def _connect(self): return self @staticmethod def _create_control_path(host, port, user, connection=None, pid=None): '''Make a hash for the controlpath based on con attributes''' pstring = '%s-%s-%s' % (host, port, user) if connection: pstring += '-%s' % connection if pid: pstring += '-%s' % to_text(pid) m = hashlib.sha1() m.update(to_bytes(pstring)) digest = m.hexdigest() cpath = '%(directory)s/' + digest[:10] return cpath @staticmethod def _sshpass_available(): global SSHPASS_AVAILABLE # We test once if sshpass is available, and remember the result. It # would be nice to use distutils.spawn.find_executable for this, but # distutils isn't always available; shutil.which() is Python3-only. if SSHPASS_AVAILABLE is None: try: p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p.communicate() SSHPASS_AVAILABLE = True except OSError: SSHPASS_AVAILABLE = False return SSHPASS_AVAILABLE @staticmethod def _persistence_controls(b_command): ''' Takes a command array and scans it for ControlPersist and ControlPath settings and returns two booleans indicating whether either was found. This could be smarter, e.g. returning false if ControlPersist is 'no', but for now we do it the simple way. ''' controlpersist = False controlpath = False for b_arg in (a.lower() for a in b_command): if b'controlpersist' in b_arg: controlpersist = True elif b'controlpath' in b_arg: controlpath = True return controlpersist, controlpath def _add_args(self, b_command, b_args, explanation): """ Adds arguments to the ssh command and displays a caller-supplied explanation of why. :arg b_command: A list containing the command to add the new arguments to. This list will be modified by this method. :arg b_args: An iterable of new arguments to add. This iterable is used more than once so it must be persistent (ie: a list is okay but a StringIO would not) :arg explanation: A text string explaining why the arguments were added. It will be displayed with a high enough verbosity. .. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
""" display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host) b_command += b_args def _build_command(self, binary, subsystem, *other_args): ''' Takes a executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command wrapped in local ssh shell commands and ready for execution. :arg binary: actual executable to use to execute command. :arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names. :arg other_args: dict of, value pairs passed as arguments to the ssh binary ''' b_command = [] conn_password = self.get_option('password') or self._play_context.password # # First, the command to invoke # # If we want to use password authentication, we have to set up a pipe to # write the password to sshpass. pkcs11_provider = self.get_option("pkcs11_provider") if conn_password or pkcs11_provider: if not self._sshpass_available(): raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program") if not conn_password and pkcs11_provider: raise AnsibleError("to use pkcs11_provider you must specify a password/pin") self.sshpass_pipe = os.pipe() b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')] password_prompt = self.get_option('sshpass_prompt') if not password_prompt and pkcs11_provider: # Set default password prompt for pkcs11_provider to make it clear its a PIN password_prompt = 'Enter PIN for ' if password_prompt: b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')] b_command += [to_bytes(binary, errors='surrogate_or_strict')] # # Next, additional arguments based on the configuration. # # pkcs11 mode allows the use of Smartcards or Yubikey devices if conn_password and pkcs11_provider: self._add_args(b_command, (b"-o", b"KbdInteractiveAuthentication=no", b"-o", b"PreferredAuthentications=publickey", b"-o", b"PasswordAuthentication=no", b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)), u'Enable pkcs11') # sftp batch mode allows us to correctly catch failed transfers, but can # be disabled if the client side doesn't support the option. However, # sftp batch mode does not prompt for passwords so it must be disabled # if not using controlpersist and using sshpass if subsystem == 'sftp' and self.get_option('sftp_batch_mode'): if conn_password: b_args = [b'-o', b'BatchMode=no'] self._add_args(b_command, b_args, u'disable batch mode for sshpass') b_command += [b'-b', b'-'] if self._play_context.verbosity > 3: b_command.append(b'-vvv') # Next, we add ssh_args ssh_args = self.get_option('ssh_args') if ssh_args: b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(ssh_args)] self._add_args(b_command, b_args, u"ansible.cfg set ssh_args") # Now we add various arguments that have their own specific settings defined in docs above. 
if self.get_option('host_key_checking') is False: b_args = (b"-o", b"StrictHostKeyChecking=no") self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled") self.port = self.get_option('port') if self.port is not None: b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict')) self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set") key = self.get_option('private_key_file') if key: b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"') self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set") if not conn_password: self._add_args( b_command, ( b"-o", b"KbdInteractiveAuthentication=no", b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey", b"-o", b"PasswordAuthentication=no" ), u"ansible_password/ansible_ssh_password not set" ) self.user = self.get_option('remote_user') if self.user: self._add_args( b_command, (b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')), u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set" ) timeout = self.get_option('timeout') self._add_args( b_command, (b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')), u"ANSIBLE_TIMEOUT/timeout set" ) # Add in any common or binary-specific arguments from the PlayContext # (i.e. inventory or task settings or overrides on the command line). for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)): attr = self.get_option(opt) if attr is not None: b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)] self._add_args(b_command, b_args, u"Set %s" % opt) # Check if ControlPersist is enabled and add a ControlPath if one hasn't # already been set. controlpersist, controlpath = self._persistence_controls(b_command) if controlpersist: self._persistent = True if not controlpath: self.control_path_dir = self.get_option('control_path_dir') cpdir = unfrackpath(self.control_path_dir) b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict') # The directory must exist and be writable. makedirs_safe(b_cpdir, 0o700) if not os.access(b_cpdir, os.W_OK): raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir)) self.control_path = self.get_option('control_path') if not self.control_path: self.control_path = self._create_control_path( self.host, self.port, self.user ) b_args = (b"-o", b'ControlPath="%s"' % to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict')) self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath") # Finally, we add any caller-supplied extras. if other_args: b_command += [to_bytes(a) for a in other_args] return b_command def _send_initial_data(self, fh, in_data, ssh_process): ''' Writes initial data to the stdin filehandle of the subprocess and closes it. (The handle must be closed; otherwise, for example, "sftp -b -" will just hang forever waiting for more commands.) ''' display.debug(u'Sending initial data') try: fh.write(to_bytes(in_data)) fh.close() except (OSError, IOError) as e: # The ssh connection may have already terminated at this point, with a more useful error # Only raise AnsibleConnectionFailure if the ssh process is still alive time.sleep(0.001) ssh_process.poll() if getattr(ssh_process, 'returncode', None) is None: raise AnsibleConnectionFailure( 'Data could not be sent to remote host "%s". 
Make sure this host can be reached ' 'over ssh: %s' % (self.host, to_native(e)), orig_exc=e ) display.debug(u'Sent initial data (%d bytes)' % len(in_data)) # Used by _run() to kill processes on failures @staticmethod def _terminate_process(p): """ Terminate a process, ignoring errors """ try: p.terminate() except (OSError, IOError): pass # This is separate from _run() because we need to do the same thing for stdout # and stderr. def _examine_output(self, source, state, b_chunk, sudoable): ''' Takes a string, extracts complete lines from it, tests to see if they are a prompt, error message, etc., and sets appropriate flags in self. Prompt and success lines are removed. Returns the processed (i.e. possibly-edited) output and the unprocessed remainder (to be processed with the next chunk) as strings. ''' output = [] for b_line in b_chunk.splitlines(True): display_line = to_text(b_line).rstrip('\r\n') suppress_output = False # display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line)) if SSH_DEBUG.match(display_line): # skip lines from ssh debug output to avoid false matches pass elif self.become.expect_prompt() and self.become.check_password_prompt(b_line): display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_prompt'] = True suppress_output = True elif self.become.success and self.become.check_success(b_line): display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_success'] = True suppress_output = True elif sudoable and self.become.check_incorrect_password(b_line): display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_error'] = True elif sudoable and self.become.check_missing_password(b_line): display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_nopasswd_error'] = True if not suppress_output: output.append(b_line) # The chunk we read was most likely a series of complete lines, but just # in case the last line was incomplete (and not a prompt, which we would # have removed from the output), we retain it to be processed with the # next chunk. remainder = b'' if output and not output[-1].endswith(b'\n'): remainder = output[-1] output = output[:-1] return b''.join(output), remainder def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True): ''' Starts the command and communicates with it until it ends. ''' # We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen display_cmd = u' '.join(shlex.quote(to_text(c)) for c in cmd) display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host) # Start the given command. If we don't need to pipeline data, we can try # to use a pseudo-tty (ssh will have been invoked with -tt). If we are # pipelining data, or can't create a pty, we fall back to using plain # old pipes. 
p = None if isinstance(cmd, (text_type, binary_type)): cmd = to_bytes(cmd) else: cmd = list(map(to_bytes, cmd)) conn_password = self.get_option('password') or self._play_context.password if not in_data: try: # Make sure stdin is a proper pty to avoid tcgetattr errors master, slave = pty.openpty() if PY3 and conn_password: # pylint: disable=unexpected-keyword-arg p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe) else: p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdin = os.fdopen(master, 'wb', 0) os.close(slave) except (OSError, IOError): p = None if not p: try: if PY3 and conn_password: # pylint: disable=unexpected-keyword-arg p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe) else: p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdin = p.stdin except (OSError, IOError) as e: raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e)) # If we are using SSH password authentication, write the password into # the pipe we opened in _build_command. if conn_password: os.close(self.sshpass_pipe[0]) try: os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n') except OSError as e: # Ignore broken pipe errors if the sshpass process has exited. if e.errno != errno.EPIPE or p.poll() is None: raise os.close(self.sshpass_pipe[1]) # # SSH state machine # # Now we read and accumulate output from the running process until it # exits. Depending on the circumstances, we may also need to write an # escalation password and/or pipelined input to the process. states = [ 'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit' ] # Are we requesting privilege escalation? Right now, we may be invoked # to execute sftp/scp with sudoable=True, but we can request escalation # only when using ssh. Otherwise we can send initial data straightaway. state = states.index('ready_to_send') if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable: prompt = getattr(self.become, 'prompt', None) if prompt: # We're requesting escalation with a password, so we have to # wait for a password prompt. state = states.index('awaiting_prompt') display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt))) elif self.become and self.become.success: # We're requesting escalation without a password, so we have to # detect success/failure before sending any initial data. state = states.index('awaiting_escalation') display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success))) # We store accumulated stdout and stderr output from the process here, # but strip any privilege escalation prompt/confirmation lines first. # Output is accumulated into tmp_*, complete lines are extracted into # an array, then checked and removed or copied to stdout or stderr. We # set any flags based on examining the output in self._flags. 
b_stdout = b_stderr = b'' b_tmp_stdout = b_tmp_stderr = b'' self._flags = dict( become_prompt=False, become_success=False, become_error=False, become_nopasswd_error=False ) # select timeout should be longer than the connect timeout, otherwise # they will race each other when we can't connect, and the connect # timeout usually fails timeout = 2 + self.get_option('timeout') for fd in (p.stdout, p.stderr): fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK) # TODO: bcoca would like to use SelectSelector() when open; # select is faster when the number of filehandles is low and we only ever handle 1. selector = selectors.DefaultSelector() selector.register(p.stdout, selectors.EVENT_READ) selector.register(p.stderr, selectors.EVENT_READ) # If we can send initial data without waiting for anything, we do so # before we start polling if states[state] == 'ready_to_send' and in_data: self._send_initial_data(stdin, in_data, p) state += 1 try: while True: poll = p.poll() events = selector.select(timeout) # We pay attention to timeouts only while negotiating a prompt. if not events: # We timed out if state <= states.index('awaiting_escalation'): # If the process has already exited, then it's not really a # timeout; we'll let the normal error handling deal with it. if poll is not None: break self._terminate_process(p) raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout))) # Read whatever output is available on stdout and stderr, and stop # listening to the pipe if it's been closed. for key, event in events: if key.fileobj == p.stdout: b_chunk = p.stdout.read() if b_chunk == b'': # stdout has been closed, stop watching it selector.unregister(p.stdout) # When ssh has ControlMaster (+ControlPath/Persist) enabled, the # first connection goes into the background and we never see EOF # on stderr. If we see EOF on stdout, lower the select timeout # to reduce the time wasted selecting on stderr if we observe # that the process has not yet exited after this EOF. Otherwise # we may spend a long timeout period waiting for an EOF that is # not going to arrive until the persisted connection closes. timeout = 1 b_tmp_stdout += b_chunk display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk))) elif key.fileobj == p.stderr: b_chunk = p.stderr.read() if b_chunk == b'': # stderr has been closed, stop watching it selector.unregister(p.stderr) b_tmp_stderr += b_chunk display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk))) # We examine the output line-by-line until we have negotiated any # privilege escalation prompt and subsequent success/error message. # Afterwards, we can accumulate output without looking at it. if state < states.index('ready_to_send'): if b_tmp_stdout: b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable) b_stdout += b_output b_tmp_stdout = b_unprocessed if b_tmp_stderr: b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable) b_stderr += b_output b_tmp_stderr = b_unprocessed else: b_stdout += b_tmp_stdout b_stderr += b_tmp_stderr b_tmp_stdout = b_tmp_stderr = b'' # If we see a privilege escalation prompt, we send the password. # (If we're expecting a prompt but the escalation succeeds, we # didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt': if self._flags['become_prompt']: display.debug(u'Sending become_password in response to prompt') become_pass = self.become.get_option('become_pass', playcontext=self._play_context) stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n') # On python3 stdin is a BufferedWriter, and we don't have a guarantee # that the write will happen without a flush stdin.flush() self._flags['become_prompt'] = False state += 1 elif self._flags['become_success']: state += 1 # We've requested escalation (with or without a password), now we # wait for an error message or a successful escalation. if states[state] == 'awaiting_escalation': if self._flags['become_success']: display.vvv(u'Escalation succeeded') self._flags['become_success'] = False state += 1 elif self._flags['become_error']: display.vvv(u'Escalation failed') self._terminate_process(p) self._flags['become_error'] = False raise AnsibleError('Incorrect %s password' % self.become.name) elif self._flags['become_nopasswd_error']: display.vvv(u'Escalation requires password') self._terminate_process(p) self._flags['become_nopasswd_error'] = False raise AnsibleError('Missing %s password' % self.become.name) elif self._flags['become_prompt']: # This shouldn't happen, because we should see the "Sorry, # try again" message first. display.vvv(u'Escalation prompt repeated') self._terminate_process(p) self._flags['become_prompt'] = False raise AnsibleError('Incorrect %s password' % self.become.name) # Once we're sure that the privilege escalation prompt, if any, has # been dealt with, we can send any initial data and start waiting # for output. if states[state] == 'ready_to_send': if in_data: self._send_initial_data(stdin, in_data, p) state += 1 # Now we're awaiting_exit: has the child process exited? If it has, # and we've read all available output from it, we're done. if poll is not None: if not selector.get_map() or not events: break # We should not see further writes to the stdout/stderr file # descriptors after the process has closed, set the select # timeout to gather any last writes we may have missed. timeout = 0 continue # If the process has not yet exited, but we've already read EOF from # its stdout and stderr (and thus no longer watching any file # descriptors), we can just wait for it to exit. elif not selector.get_map(): p.wait() break # Otherwise there may still be outstanding data to read. finally: selector.close() # close stdin, stdout, and stderr after process is terminated and # stdout/stderr are read completely (see also issues #848, #64768). stdin.close() p.stdout.close() p.stderr.close() if self.get_option('host_key_checking'): if cmd[0] == b"sshpass" and p.returncode == 6: raise AnsibleError('Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support ' 'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.') controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr if p.returncode != 0 and controlpersisterror: raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" ' '(or ssh_args in [ssh_connection] section of the config file) before running again') # If we find a broken pipe because of ControlPersist timeout expiring (see #16731), # we raise a special exception so that we can retry a connection. 
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr if p.returncode == 255: additional = to_native(b_stderr) if controlpersist_broken_pipe: raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional) elif in_data and checkrc: raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s' % (self.host, additional)) return (p.returncode, b_stdout, b_stderr) @_ssh_retry def _run(self, cmd, in_data, sudoable=True, checkrc=True): """Wrapper around _bare_run that retries the connection """ return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc) @_ssh_retry def _file_transport_command(self, in_path, out_path, sftp_action): # scp and sftp require square brackets for IPv6 addresses, but # accept them for hostnames and IPv4 addresses too. host = '[%s]' % self.host smart_methods = ['sftp', 'scp', 'piped'] # Windows does not support dd so we cannot use the piped method if getattr(self._shell, "_IS_WINDOWS", False): smart_methods.remove('piped') # Transfer methods to try methods = [] # Use the transfer_method option if set, otherwise use scp_if_ssh ssh_transfer_method = self.get_option('ssh_transfer_method') scp_if_ssh = self.get_option('scp_if_ssh') if ssh_transfer_method is None and scp_if_ssh == 'smart': ssh_transfer_method = 'smart' if ssh_transfer_method is not None: if ssh_transfer_method == 'smart': methods = smart_methods else: methods = [ssh_transfer_method] else: # since this can be a non-bool now, we need to handle it correctly if not isinstance(scp_if_ssh, bool): scp_if_ssh = scp_if_ssh.lower() if scp_if_ssh in BOOLEANS: scp_if_ssh = boolean(scp_if_ssh, strict=False) elif scp_if_ssh != 'smart': raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]') if scp_if_ssh == 'smart': methods = smart_methods elif scp_if_ssh is True: methods = ['scp'] else: methods = ['sftp'] for method in methods: returncode = stdout = stderr = None if method == 'sftp': cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host)) in_data = u"{0} {1} {2}\n".format(sftp_action, shlex.quote(in_path), shlex.quote(out_path)) in_data = to_bytes(in_data, nonstring='passthru') (returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False) elif method == 'scp': scp = self.get_option('scp_executable') if sftp_action == 'get': cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path) else: cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path))) in_data = None (returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False) elif method == 'piped': if sftp_action == 'get': # we pass sudoable=False to disable pty allocation, which # would end up mixing stdout/stderr and screwing with newlines (returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False) with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file: out_file.write(stdout) else: with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f: in_data = to_bytes(f.read(), nonstring='passthru') if not in_data: count = ' count=0' else: count = '' (returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False) # Check the return code and rollover to next method if failed if returncode == 0: return 
(returncode, stdout, stderr) else: # If not in smart mode, the data will be printed by the raise below if len(methods) > 1: display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host)) display.debug(u'%s' % to_text(stdout)) display.debug(u'%s' % to_text(stderr)) if returncode == 255: raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr))) else: raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" % (to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr))) def _escape_win_path(self, path): """ converts a Windows path to one that's supported by SFTP and SCP """ # If using a root path then we need to start with / prefix = "" if re.match(r'^\w{1}:', path): prefix = "/" # Convert all '\' to '/' return "%s%s" % (prefix, path.replace("\\", "/")) # # Main public methods # def exec_command(self, cmd, in_data=None, sudoable=True): ''' run a command on the remote host ''' super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable) display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host) if getattr(self._shell, "_IS_WINDOWS", False): # Become method 'runas' is done in the wrapper that is executed, # need to disable sudoable so the bare_run is not waiting for a # prompt that will not occur sudoable = False # Make sure our first command is to set the console encoding to # utf-8, this must be done via chcp to get utf-8 (65001) cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND] cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False)) cmd = ' '.join(cmd_parts) # we can only use tty when we are not pipelining the modules. piping # data into /usr/bin/python inside a tty automatically invokes the # python interactive-mode but the modules are not compatible with the # interactive-mode ("unexpected indent" mainly because of empty lines) ssh_executable = self.get_option('ssh_executable') # -tt can cause various issues in some environments so allow the user # to disable it as a troubleshooting method. 
use_tty = self.get_option('use_tty') if not in_data and sudoable and use_tty: args = ('-tt', self.host, cmd) else: args = (self.host, cmd) cmd = self._build_command(ssh_executable, 'ssh', *args) (returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable) # When running on Windows, stderr may contain CLIXML encoded output if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"): stderr = _parse_clixml(stderr) return (returncode, stdout, stderr) def put_file(self, in_path, out_path): ''' transfer a file from local to remote ''' super(Connection, self).put_file(in_path, out_path) display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host) if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')): raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path))) if getattr(self._shell, "_IS_WINDOWS", False): out_path = self._escape_win_path(out_path) return self._file_transport_command(in_path, out_path, 'put') def fetch_file(self, in_path, out_path): ''' fetch a file from remote to local ''' super(Connection, self).fetch_file(in_path, out_path) display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host) # need to add / if path is rooted if getattr(self._shell, "_IS_WINDOWS", False): in_path = self._escape_win_path(in_path) return self._file_transport_command(in_path, out_path, 'get') def reset(self): run_reset = False # If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening. # only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set # 'check' will determine this. cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host) display.vvv(u'sending connection check: %s' % to_text(cmd)) p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() status_code = p.wait() if status_code != 0: display.vvv(u"No connection to reset: %s" % to_text(stderr)) else: run_reset = True if run_reset: cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host) display.vvv(u'sending connection stop: %s' % to_text(cmd)) p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() status_code = p.wait() if status_code != 0: display.warning(u"Failed to reset connection:%s" % to_text(stderr)) self.close() def close(self): self._connected = False
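For reference, a minimal sketch (not part of the plugin; `retries` is a hypothetical value standing in for the `reconnection_retries` option / `ANSIBLE_SSH_RETRIES`) of the backoff schedule the `_ssh_retry` decorator above applies between failed attempts: the pause grows as `2 ** attempt - 1` seconds and is capped at 30.

```python
# Sketch of _ssh_retry's pacing between failed connection attempts.
retries = 5  # hypothetical reconnection_retries value

for attempt in range(retries + 1):
    pause = min(2 ** attempt - 1, 30)  # 0, 1, 3, 7, 15, 30, ...
    print("after attempt %d, sleep %ds before retrying" % (attempt + 1, pause))
```

The cap keeps a long retry budget from degenerating into multi-minute sleeps, while the exponential growth avoids hammering a host that is briefly unreachable.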
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is actually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
test/integration/targets/delegate_to/delegate_facts_loop.yml
- hosts: localhost gather_facts: no tasks: - set_fact: test: 123 delegate_to: "{{ item }}" delegate_facts: true when: test is not defined loop: "{{ groups['all'] | difference(['localhost']) }}" - name: ensure we didn't create it on the current host assert: that: - test is undefined - name: ensure facts get created assert: that: - "'test' in hostvars[item]" - hostvars[item]['test'] == 123 loop: "{{ groups['all'] | difference(['localhost']) }}"
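To make the test's loop expression concrete, here is a small sketch (the inventory dict below is hypothetical, not the test's real inventory) of what `groups['all'] | difference(['localhost'])` evaluates to, i.e. the hosts the facts are delegated to:

```python
# Jinja2's difference filter, modelled in plain Python for a made-up inventory.
groups = {'all': ['localhost', 'testhost', 'testhost2']}

loop_items = [h for h in groups['all'] if h not in ('localhost',)]
print(loop_items)  # ['testhost', 'testhost2'] -- every host except the current one
```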
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is actually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
test/integration/targets/delegate_to/inventory
[local] testhost ansible_connection=local testhost2 ansible_connection=local testhost3 ansible_ssh_host=127.0.0.3 testhost4 ansible_ssh_host=127.0.0.4 testhost5 ansible_connection=fakelocal [all:vars] ansible_python_interpreter="{{ ansible_playbook_python }}"
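Read as data, the INI inventory above maps each test host to the connection variables the integration tests later rely on; a rough sketch of the resulting per-host variables (structure assumed, group-level `ansible_python_interpreter` omitted):

```python
# Per-host variables encoded by the INI inventory above.
hostvars = {
    'testhost':  {'ansible_connection': 'local'},
    'testhost2': {'ansible_connection': 'local'},
    'testhost3': {'ansible_ssh_host': '127.0.0.3'},
    'testhost4': {'ansible_ssh_host': '127.0.0.4'},
    'testhost5': {'ansible_connection': 'fakelocal'},
}
```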
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is actually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
test/integration/targets/delegate_to/runme.sh
#!/usr/bin/env bash set -eux platform="$(uname)" function setup() { if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then ifconfig lo0 existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true) echo "${existing}" for i in 3 4 254; do ip="127.0.0.${i}" if [[ "${existing}" != *"${ip}"* ]]; then ifconfig lo0 alias "${ip}" up fi done ifconfig lo0 fi } function teardown() { if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then for i in 3 4 254; do ip="127.0.0.${i}" if [[ "${existing}" != *"${ip}"* ]]; then ifconfig lo0 -alias "${ip}" fi done ifconfig lo0 fi } setup trap teardown EXIT ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \ ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@" # this test is not doing what it says it does, also relies on var that should not be available #ansible-playbook test_loop_control.yml -v "$@" ansible-playbook test_delegate_to_loop_randomness.yml -v "$@" ansible-playbook delegate_and_nolog.yml -i inventory -v "$@" ansible-playbook delegate_facts_block.yml -i inventory -v "$@" ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@" # ensure we are using correct settings when delegating ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@" ansible-playbook has_hostvars.yml -i inventory -v "$@" # test ansible_x_interpreter # python source virtualenv.sh ( cd "${OUTPUT_DIR}"/venv/bin ln -s python firstpython ln -s python secondpython ) ansible-playbook verify_interpreter.yml -i inventory_interpreters -v "$@" ansible-playbook discovery_applied.yml -i inventory -v "$@" ansible-playbook resolve_vars.yml -i inventory -v "$@" ansible-playbook test_delegate_to_lookup_context.yml -i inventory -v "$@" ansible-playbook delegate_local_from_root.yml -i inventory -v "$@" -e 'ansible_user=root' ansible-playbook delegate_with_fact_from_delegate_host.yml "$@" ansible-playbook delegate_facts_loop.yml -i inventory -v "$@"
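The setup/teardown pair in the script above only adds a loopback alias on FreeBSD/Darwin when it is not already configured, so teardown never removes a pre-existing alias. The same idempotency guard, sketched in Python (the `ifconfig lo0` output below is illustrative):

```python
# The guard runme.sh applies before aliasing each loopback address:
# only add (and later remove) an alias that was not already present.
existing = "inet 127.0.0.1 netmask 0xff000000"  # illustrative `ifconfig lo0` output

for i in (3, 4, 254):
    ip = "127.0.0.%d" % i
    if ip not in existing:
        print("would run: ifconfig lo0 alias %s up" % ip)
```

Linux needs no such setup because its loopback interface answers on the whole 127.0.0.0/8 range by default, which is why the aliasing is gated on the platform check.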
closed
ansible/ansible
https://github.com/ansible/ansible
76,007
delegate_to executing on wrong host
### Summary I have an inventory with two hosts, say H1 and H2. On H2, I execute a task with `delegate_to: H1`. It seems, however, that the task is actually executed on H2. ### Issue Type Bug Report ### Component Name core ### Ansible Version ```console $ ansible --version ansible [core 2.11.5] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/mrks/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/mrks/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.7 (default, Aug 31 2021, 13:28:12) [GCC 11.1.0] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <none> ``` ### OS / Environment Arch Linux ### Steps to Reproduce Playbook: ```yaml - hosts: H2 tasks: - shell: hostname register: foo delegate_to: H1 - debug: var: foo ``` ### Expected Results Expected foo.stdout = 'H1'. By the way, I am pretty sure that this worked for a long time. ### Actual Results ```console TASK [shell] ******************************************************************* changed: [H2 -> H1] TASK [debug] ******************************************************************* ok: [H2] => { "foo": { ... "stdout": "H2", "stdout_lines": [ "H2" ] } } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76007
https://github.com/ansible/ansible/pull/76017
a5eadaf3fd5496bd1f100ff14badf5c4947185a2
be19863e44cc6b78706147b25489a73d7c8fbcb5
2021-10-11T19:53:38Z
python
2022-02-07T20:13:40Z
test/integration/targets/delegate_to/test_delegate_to.yml
- hosts: testhost3 vars: - template_role: ./roles/test_template - output_dir: "{{ playbook_dir }}" - templated_var: foo - templated_dict: { 'hello': 'world' } tasks: - name: Test no delegate_to setup: register: setup_results - assert: that: - '"127.0.0.3" in setup_results.ansible_facts.ansible_env["SSH_CONNECTION"]' - name: Test delegate_to with host in inventory setup: register: setup_results delegate_to: testhost4 - debug: var=setup_results - assert: that: - '"127.0.0.4" in setup_results.ansible_facts.ansible_env["SSH_CONNECTION"]' - name: Test delegate_to with host not in inventory setup: register: setup_results delegate_to: 127.0.0.254 - assert: that: - '"127.0.0.254" in setup_results.ansible_facts.ansible_env["SSH_CONNECTION"]' # # Smoketest some other modules do not error as a canary # - name: Test file works with delegate_to and a host in inventory file: path={{ output_dir }}/foo.txt mode=0644 state=touch delegate_to: testhost4 - name: Test file works with delegate_to and a host not in inventory file: path={{ output_dir }}/tmp.txt mode=0644 state=touch delegate_to: 127.0.0.254 - name: Test template works with delegate_to and a host in inventory template: src={{ template_role }}/templates/foo.j2 dest={{ output_dir }}/foo.txt delegate_to: testhost4 - name: Test template works with delegate_to and a host not in inventory template: src={{ template_role }}/templates/foo.j2 dest={{ output_dir }}/foo.txt delegate_to: 127.0.0.254 - name: remove test file file: path={{ output_dir }}/foo.txt state=absent - name: remove test file file: path={{ output_dir }}/tmp.txt state=absent
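The assertions in the playbook above work because sshd exports `SSH_CONNECTION` as "client_ip client_port server_ip server_port", so the server-side address identifies which host actually ran the task. A small sketch of that check (the environment value below is illustrative):

```python
# How the test identifies the host a task really ran on: the third field of
# SSH_CONNECTION is the server-side address of the connection.
ssh_connection = "10.0.0.9 51234 127.0.0.4 22"  # illustrative ansible_env value

assert "127.0.0.4" in ssh_connection
server_ip = ssh_connection.split()[2]
print("task executed on", server_ip)  # 127.0.0.4 -> the delegated host
```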
closed
ansible/ansible
https://github.com/ansible/ansible
69,846
Git module: don't encourage MITM attacks
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY The Git module encourages use of `StrictHostKeyChecking=no` with SSH, which makes the connection vulnerable to a MITM attack. ##### ISSUE TYPE OpenSSH 7.5+ (2017-03-20) added a safer option which automatically accepts the key on first use but will not allow a MITM on subsequent connections: `StrictHostKeyChecking=accept-new` https://man.openbsd.org/ssh_config#StrictHostKeyChecking It would be great if Ansible encouraged use of that mode by default in the documentation and to implement features such as the Git module's [`accept_hostkey`](https://github.com/ansible/ansible/blob/323d2adfcce9a53e906c2c4a40f6ac5fb32a0e68/lib/ansible/modules/git.py#L1103-L1108) feature. ##### ANSIBLE VERSION ```paste below N/A ``` ##### COMPONENT NAME * `/modules/source_control/git.py`
https://github.com/ansible/ansible/issues/69846
https://github.com/ansible/ansible/pull/73404
be19863e44cc6b78706147b25489a73d7c8fbcb5
b493c590bcee9b64e8ae02c17d4fde2331e0598b
2020-06-02T21:33:23Z
python
2022-02-07T21:05:16Z
changelogs/fragments/git_fixes.yml
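The changelog fragment's content is not included in this record. For context on the fix it tracks, a hypothetical helper (names invented here, not the git module's API) contrasting the ssh option the fix introduces with the one the issue warns about:

```python
# accept-new (OpenSSH 7.5+) trusts a key on first use but still rejects a
# changed key later; StrictHostKeyChecking=no accepts anything, enabling MITM.
def host_key_ssh_opts(accept_hostkey=False, accept_newhostkey=False):
    if accept_newhostkey:
        return "-o StrictHostKeyChecking=accept-new"
    if accept_hostkey:
        return "-o StrictHostKeyChecking=no"
    return ""
```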
closed
ansible/ansible
https://github.com/ansible/ansible
69,846
Git module: don't encourage MITM attacks
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY The Git module encourages use of `StrictHostKeyChecking=no` with SSH, which makes the connection vulnerable to a MITM attack. ##### ISSUE TYPE OpenSSH 7.5+ (2017-03-20) added a safer option which automatically accepts the key on first use but will not allow a MITM on subsequent connections: `StrictHostKeyChecking=accept-new` https://man.openbsd.org/ssh_config#StrictHostKeyChecking It would be great if Ansible encouraged use of that mode by default in the documentation and to implement features such as the Git module's [`accept_hostkey`](https://github.com/ansible/ansible/blob/323d2adfcce9a53e906c2c4a40f6ac5fb32a0e68/lib/ansible/modules/git.py#L1103-L1108) feature. ##### ANSIBLE VERSION ```paste below N/A ``` ##### COMPONENT NAME * `/modules/source_control/git.py`
https://github.com/ansible/ansible/issues/69846
https://github.com/ansible/ansible/pull/73404
be19863e44cc6b78706147b25489a73d7c8fbcb5
b493c590bcee9b64e8ae02c17d4fde2331e0598b
2020-06-02T21:33:23Z
python
2022-02-07T21:05:16Z
lib/ansible/modules/git.py
# -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: git author: - "Ansible Core Team" - "Michael DeHaan" version_added: "0.0.1" short_description: Deploy software (or files) from git checkouts description: - Manage I(git) checkouts of repositories to deploy files or software. extends_documentation_fragment: action_common_attributes options: repo: description: - git, SSH, or HTTP(S) protocol address of the git repository. type: str required: true aliases: [ name ] dest: description: - The path of where the repository should be checked out. This is equivalent to C(git clone [repo_url] [directory]). The repository named in I(repo) is not appended to this path and the destination directory must be empty. This parameter is required, unless I(clone) is set to C(no). type: path required: true version: description: - What version of the repository to check out. This can be the literal string C(HEAD), a branch name, a tag name. It can also be a I(SHA-1) hash, in which case I(refspec) needs to be specified if the given revision is not already available. type: str default: "HEAD" accept_hostkey: description: - If C(yes), ensure that "-o StrictHostKeyChecking=no" is present as an ssh option. type: bool default: 'no' version_added: "1.5" accept_newhostkey: description: - As of OpenSSH 7.5, "-o StrictHostKeyChecking=accept-new" can be used which is safer and will only accept host keys which are not present or are the same. If C(yes), ensure that "-o StrictHostKeyChecking=accept-new" is present as an ssh option. type: bool default: 'no' version_added: "2.12" ssh_opts: description: - Creates a wrapper script and exports the path as GIT_SSH which git then automatically uses to override ssh arguments. An example value could be "-o StrictHostKeyChecking=no" (although this particular option is better set by I(accept_hostkey)). type: str version_added: "1.5" key_file: description: - Specify an optional private key file path, on the target host, to use for the checkout. type: path version_added: "1.5" reference: description: - Reference repository (see "git clone --reference ..."). version_added: "1.4" remote: description: - Name of the remote. type: str default: "origin" refspec: description: - Add an additional refspec to be fetched. If version is set to a I(SHA-1) not reachable from any branch or tag, this option may be necessary to specify the ref containing the I(SHA-1). Uses the same syntax as the C(git fetch) command. An example value could be "refs/meta/config". type: str version_added: "1.9" force: description: - If C(yes), any modified files in the working repository will be discarded. Prior to 0.7, this was always C(yes) and could not be disabled. Prior to 1.9, the default was C(yes). type: bool default: 'no' version_added: "0.7" depth: description: - Create a shallow clone with a history truncated to the specified number of revisions. The minimum possible value is C(1), otherwise ignored. Needs I(git>=1.9.1) to work correctly. type: int version_added: "1.2" clone: description: - If C(no), do not clone the repository even if it does not exist locally. type: bool default: 'yes' version_added: "1.9" update: description: - If C(no), do not retrieve new revisions from the origin repository.
- Operations like archive will work on the existing (old) repository and might not respond to changes to the options version or remote. type: bool default: 'yes' version_added: "1.2" executable: description: - Path to git executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. type: path version_added: "1.4" bare: description: - If C(yes), repository will be created as a bare repo, otherwise it will be a standard repo with a workspace. type: bool default: 'no' version_added: "1.4" umask: description: - The umask to set before doing any checkouts, or any other repository maintenance. type: raw version_added: "2.2" recursive: description: - If C(no), repository will be cloned without the --recursive option, skipping sub-modules. type: bool default: 'yes' version_added: "1.6" single_branch: description: - Clone only the history leading to the tip of the specified revision. type: bool default: 'no' version_added: '2.11' track_submodules: description: - If C(yes), submodules will track the latest commit on their master branch (or other branch specified in .gitmodules). If C(no), submodules will be kept at the revision specified by the main project. This is equivalent to specifying the --remote flag to git submodule update. type: bool default: 'no' version_added: "1.8" verify_commit: description: - If C(yes), when cloning or checking out a I(version) verify the signature of a GPG signed commit. This requires git version>=2.1.0 to be installed. The commit MUST be signed and the public key MUST be present in the GPG keyring. type: bool default: 'no' version_added: "2.0" archive: description: - Specify archive file path with extension. If specified, creates an archive file of the specified format containing the tree structure for the source tree. Allowed archive formats ["zip", "tar.gz", "tar", "tgz"]. - This will clone and perform git archive from local directory as not all git servers support git archive. type: path version_added: "2.4" archive_prefix: description: - Specify a prefix to add to each file path in archive. Requires I(archive) to be specified. version_added: "2.10" type: str separate_git_dir: description: - The path to place the cloned repository. If specified, Git repository can be separated from working tree. type: path version_added: "2.7" gpg_whitelist: description: - A list of trusted GPG fingerprints to compare to the fingerprint of the GPG-signed commit. - Only used when I(verify_commit=yes). - Use of this feature requires Git 2.6+ due to its reliance on git's C(--raw) flag to C(verify-commit) and C(verify-tag). type: list elements: str default: [] version_added: "2.9" requirements: - git>=1.7.1 (the command line tool) attributes: check_mode: support: full diff_mode: support: full platform: platforms: posix notes: - "If the task seems to be hanging, first verify remote host is in C(known_hosts). SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, one solution is to use the option accept_hostkey. Another solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts." 
''' EXAMPLES = ''' - name: Git checkout ansible.builtin.git: repo: 'https://foosball.example.org/path/to/repo.git' dest: /srv/checkout version: release-0.22 - name: Read-write git checkout from github ansible.builtin.git: repo: [email protected]:mylogin/hello.git dest: /home/mylogin/hello - name: Just ensuring the repo checkout exists ansible.builtin.git: repo: 'https://foosball.example.org/path/to/repo.git' dest: /srv/checkout update: no - name: Just get information about the repository whether or not it has already been cloned locally ansible.builtin.git: repo: 'https://foosball.example.org/path/to/repo.git' dest: /srv/checkout clone: no update: no - name: Checkout a github repo and use refspec to fetch all pull requests ansible.builtin.git: repo: https://github.com/ansible/ansible-examples.git dest: /src/ansible-examples refspec: '+refs/pull/*:refs/heads/*' - name: Create git archive from repo ansible.builtin.git: repo: https://github.com/ansible/ansible-examples.git dest: /src/ansible-examples archive: /tmp/ansible-examples.zip - name: Clone a repo with separate git directory ansible.builtin.git: repo: https://github.com/ansible/ansible-examples.git dest: /src/ansible-examples separate_git_dir: /src/ansible-examples.git - name: Example clone of a single branch ansible.builtin.git: repo: https://github.com/ansible/ansible-examples.git dest: /src/ansible-examples single_branch: yes version: master - name: Avoid hanging when http(s) password is missing ansible.builtin.git: repo: https://github.com/ansible/could-be-a-private-repo dest: /src/from-private-repo environment: GIT_TERMINAL_PROMPT: 0 # reports "terminal prompts disabled" on missing password # or GIT_ASKPASS: /bin/true # for git before version 2.3.0, reports "Authentication failed" on missing password ''' RETURN = ''' after: description: Last commit revision of the repository retrieved during the update. returned: success type: str sample: 4c020102a9cd6fe908c9a4a326a38f972f63a903 before: description: Commit revision before the repository was updated, "null" for new repository. returned: success type: str sample: 67c04ebe40a003bda0efb34eacfb93b0cafdf628 remote_url_changed: description: Contains True or False whether or not the remote URL was changed. returned: success type: bool sample: True warnings: description: List of warnings if requested features were not available due to a too old git version. returned: error type: str sample: git version is too old to fully support the depth argument. Falling back to full checkouts. git_dir_now: description: Contains the new path of .git directory if it is changed. returned: success type: str sample: /path/to/new/git/dir git_dir_before: description: Contains the original path of .git directory if it is changed. returned: success type: str sample: /path/to/old/git/dir ''' import filecmp import os import re import shlex import stat import sys import shutil import tempfile from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils._text import to_native, to_text from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.locale import get_best_parsable_locale from ansible.module_utils.common.process import get_bin_path from ansible.module_utils.six import b, string_types def relocate_repo(module, result, repo_dir, old_repo_dir, worktree_dir): if os.path.exists(repo_dir): module.fail_json(msg='Separate-git-dir path %s already exists.' 
% repo_dir) if worktree_dir: dot_git_file_path = os.path.join(worktree_dir, '.git') try: shutil.move(old_repo_dir, repo_dir) with open(dot_git_file_path, 'w') as dot_git_file: dot_git_file.write('gitdir: %s' % repo_dir) result['git_dir_before'] = old_repo_dir result['git_dir_now'] = repo_dir except (IOError, OSError) as err: # if we already moved the .git dir, roll it back if os.path.exists(repo_dir): shutil.move(repo_dir, old_repo_dir) module.fail_json(msg=u'Unable to move git dir. %s' % to_text(err)) def head_splitter(headfile, remote, module=None, fail_on_error=False): '''Extract the head reference''' # https://github.com/ansible/ansible-modules-core/pull/907 res = None if os.path.exists(headfile): rawdata = None try: f = open(headfile, 'r') rawdata = f.readline() f.close() except Exception: if fail_on_error and module: module.fail_json(msg="Unable to read %s" % headfile) if rawdata: try: rawdata = rawdata.replace('refs/remotes/%s' % remote, '', 1) refparts = rawdata.split(' ') newref = refparts[-1] nrefparts = newref.split('/', 2) res = nrefparts[-1].rstrip('\n') except Exception: if fail_on_error and module: module.fail_json(msg="Unable to split head from '%s'" % rawdata) return res def unfrackgitpath(path): if path is None: return None # copied from ansible.utils.path return os.path.normpath(os.path.realpath(os.path.expanduser(os.path.expandvars(path)))) def get_submodule_update_params(module, git_path, cwd): # or: git submodule [--quiet] update [--init] [-N|--no-fetch] # [-f|--force] [--rebase] [--reference <repository>] [--merge] # [--recursive] [--] [<path>...] params = [] # run a bad submodule command to get valid params cmd = "%s submodule update --help" % (git_path) rc, stdout, stderr = module.run_command(cmd, cwd=cwd) lines = stderr.split('\n') update_line = None for line in lines: if 'git submodule [--quiet] update ' in line: update_line = line if update_line: update_line = update_line.replace('[', '') update_line = update_line.replace(']', '') update_line = update_line.replace('|', ' ') parts = shlex.split(update_line) for part in parts: if part.startswith('--'): part = part.replace('--', '') params.append(part) return params def write_ssh_wrapper(module_tmpdir): try: # make sure we have full permission to the module_dir, which # may not be the case if we're sudo'ing to a non-root user if os.access(module_tmpdir, os.W_OK | os.R_OK | os.X_OK): fd, wrapper_path = tempfile.mkstemp(prefix=module_tmpdir + '/') else: raise OSError except (IOError, OSError): fd, wrapper_path = tempfile.mkstemp() fh = os.fdopen(fd, 'w+b') template = b("""#!/bin/sh if [ -z "$GIT_SSH_OPTS" ]; then BASEOPTS="" else BASEOPTS=$GIT_SSH_OPTS fi # Let ssh fail rather than prompt BASEOPTS="$BASEOPTS -o BatchMode=yes" if [ -z "$GIT_KEY" ]; then ssh $BASEOPTS "$@" else ssh -i "$GIT_KEY" -o IdentitiesOnly=yes $BASEOPTS "$@" fi """) fh.write(template) fh.close() st = os.stat(wrapper_path) os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC) return wrapper_path def set_git_ssh(ssh_wrapper, key_file, ssh_opts): if os.environ.get("GIT_SSH"): del os.environ["GIT_SSH"] os.environ["GIT_SSH"] = ssh_wrapper if os.environ.get("GIT_KEY"): del os.environ["GIT_KEY"] if key_file: os.environ["GIT_KEY"] = key_file if os.environ.get("GIT_SSH_OPTS"): del os.environ["GIT_SSH_OPTS"] if ssh_opts: os.environ["GIT_SSH_OPTS"] = ssh_opts def get_version(module, git_path, dest, ref="HEAD"): ''' samples the version of the git repo ''' cmd = "%s rev-parse %s" % (git_path, ref) rc, stdout, stderr = module.run_command(cmd, cwd=dest) sha = 
to_native(stdout).rstrip('\n') return sha def ssh_supports_acceptnewhostkey(module): try: ssh_path = get_bin_path('ssh') except ValueError as err: module.fail_json( msg='Remote host is missing ssh command, so you cannot ' 'use acceptnewhostkey option.', details=to_text(err)) supports_acceptnewhostkey = True cmd = [ssh_path, '-o', 'StrictHostKeyChecking=accept-new', '-V'] rc, stdout, stderr = module.run_command(cmd) if rc != 0: supports_acceptnewhostkey = False return supports_acceptnewhostkey def get_submodule_versions(git_path, module, dest, version='HEAD'): cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version] (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json( msg='Unable to determine hashes of submodules', stdout=out, stderr=err, rc=rc) submodules = {} subm_name = None for line in out.splitlines(): if line.startswith("Entering '"): subm_name = line[10:-1] elif len(line.strip()) == 40: if subm_name is None: module.fail_json() submodules[subm_name] = line.strip() subm_name = None else: module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip()) if subm_name is not None: module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name) return submodules def clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch): ''' makes a new git repo if it does not already exist ''' dest_dirname = os.path.dirname(dest) try: os.makedirs(dest_dirname) except Exception: pass cmd = [git_path, 'clone'] if bare: cmd.append('--bare') else: cmd.extend(['--origin', remote]) is_branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) or is_remote_tag(git_path, module, dest, repo, version) if depth: if version == 'HEAD' or refspec: cmd.extend(['--depth', str(depth)]) elif is_branch_or_tag: cmd.extend(['--depth', str(depth)]) cmd.extend(['--branch', version]) else: # only use depth if the remote object is branch or tag (i.e. fetchable) module.warn("Ignoring depth argument. " "Shallow clones are only available for " "HEAD, branches, tags or in combination with refspec.") if reference: cmd.extend(['--reference', str(reference)]) if single_branch: if git_version_used is None: module.fail_json(msg='Cannot find git executable at %s' % git_path) if git_version_used < LooseVersion('1.7.10'): module.warn("git version '%s' is too old to use 'single-branch'. Ignoring." 
% git_version_used) else: cmd.append("--single-branch") if is_branch_or_tag: cmd.extend(['--branch', version]) needs_separate_git_dir_fallback = False if separate_git_dir: if git_version_used is None: module.fail_json(msg='Cannot find git executable at %s' % git_path) if git_version_used < LooseVersion('1.7.5'): # git before 1.7.5 doesn't have separate-git-dir argument, do fallback needs_separate_git_dir_fallback = True else: cmd.append('--separate-git-dir=%s' % separate_git_dir) cmd.extend([repo, dest]) module.run_command(cmd, check_rc=True, cwd=dest_dirname) if needs_separate_git_dir_fallback: relocate_repo(module, result, separate_git_dir, os.path.join(dest, ".git"), dest) if bare and remote != 'origin': module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest) if refspec: cmd = [git_path, 'fetch'] if depth: cmd.extend(['--depth', str(depth)]) cmd.extend([remote, refspec]) module.run_command(cmd, check_rc=True, cwd=dest) if verify_commit: verify_commit_sign(git_path, module, dest, version, gpg_whitelist) def has_local_mods(module, git_path, dest, bare): if bare: return False cmd = "%s status --porcelain" % (git_path) rc, stdout, stderr = module.run_command(cmd, cwd=dest) lines = stdout.splitlines() lines = list(filter(lambda c: not re.search('^\\?\\?.*$', c), lines)) return len(lines) > 0 def reset(git_path, module, dest): ''' Resets the index and working tree to HEAD. Discards any changes to tracked files in working tree since that commit. ''' cmd = "%s reset --hard HEAD" % (git_path,) return module.run_command(cmd, check_rc=True, cwd=dest) def get_diff(module, git_path, dest, repo, remote, depth, bare, before, after): ''' Return the difference between 2 versions ''' if before is None: return {'prepared': '>> Newly checked out %s' % after} elif before != after: # Ensure we have the object we are referring to during git diff ! git_version_used = git_version(git_path, module) fetch(git_path, module, repo, dest, after, remote, depth, bare, '', git_version_used) cmd = '%s diff %s %s' % (git_path, before, after) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc == 0 and out: return {'prepared': out} elif rc == 0: return {'prepared': '>> No visual differences between %s and %s' % (before, after)} elif err: return {'prepared': '>> Failed to get proper diff between %s and %s:\n>> %s' % (before, after, err)} else: return {'prepared': '>> Failed to get proper diff between %s and %s' % (before, after)} return {} def get_remote_head(git_path, module, dest, version, remote, bare): cloning = False cwd = None tag = False if remote == module.params['repo']: cloning = True elif remote == 'file://' + os.path.expanduser(module.params['repo']): cloning = True else: cwd = dest if version == 'HEAD': if cloning: # cloning the repo, just get the remote's HEAD version cmd = '%s ls-remote %s -h HEAD' % (git_path, remote) else: head_branch = get_head_branch(git_path, module, dest, remote, bare) cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch) elif is_remote_branch(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version) elif is_remote_tag(git_path, module, dest, remote, version): tag = True cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version) else: # appears to be a sha1. 
return as-is since it appears # cannot check for a specific sha1 on remote return version (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd) if len(out) < 1: module.fail_json(msg="Could not determine remote revision for %s" % version, stdout=out, stderr=err, rc=rc) out = to_native(out) if tag: # Find the dereferenced tag if this is an annotated tag. for tag in out.split('\n'): if tag.endswith(version + '^{}'): out = tag break elif tag.endswith(version): out = tag rev = out.split()[0] return rev def is_remote_tag(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version) (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if to_native(version, errors='surrogate_or_strict') in out: return True else: return False def get_branches(git_path, module, dest): branches = [] cmd = '%s branch --no-color -a' % (git_path,) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine branch data - received %s" % out, stdout=out, stderr=err) for line in out.split('\n'): if line.strip(): branches.append(line.strip()) return branches def get_annotated_tags(git_path, module, dest): tags = [] cmd = [git_path, 'for-each-ref', 'refs/tags/', '--format', '%(objecttype):%(refname:short)'] (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine tag data - received %s" % out, stdout=out, stderr=err) for line in to_native(out).split('\n'): if line.strip(): tagtype, tagname = line.strip().split(':') if tagtype == 'tag': tags.append(tagname) return tags def is_remote_branch(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version) (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if to_native(version, errors='surrogate_or_strict') in out: return True else: return False def is_local_branch(git_path, module, dest, branch): branches = get_branches(git_path, module, dest) lbranch = '%s' % branch if lbranch in branches: return True elif '* %s' % branch in branches: return True else: return False def is_not_a_branch(git_path, module, dest): branches = get_branches(git_path, module, dest) for branch in branches: if branch.startswith('* ') and ('no branch' in branch or 'detached from' in branch or 'detached at' in branch): return True return False def get_repo_path(dest, bare): if bare: repo_path = dest else: repo_path = os.path.join(dest, '.git') # Check if the .git is a file. If it is a file, it means that the repository is in external directory respective to the working copy (e.g. we are in a # submodule structure). if os.path.isfile(repo_path): with open(repo_path, 'r') as gitfile: data = gitfile.read() ref_prefix, gitdir = data.rstrip().split('gitdir: ', 1) if ref_prefix: raise ValueError('.git file has invalid git dir reference format') # There is a possibility the .git file to have an absolute path. if os.path.isabs(gitdir): repo_path = gitdir else: # Use original destination directory with data from .git file. repo_path = os.path.join(dest, gitdir) if not os.path.isdir(repo_path): raise ValueError('%s is not a directory' % repo_path) return repo_path def get_head_branch(git_path, module, dest, remote, bare=False): ''' Determine what branch HEAD is associated with. This is partly taken from lib/ansible/utils/__init__.py. It finds the correct path to .git/HEAD and reads from that file the branch that HEAD is associated with. 
In the case of a detached HEAD, this will look up the branch in .git/refs/remotes/<remote>/HEAD. ''' try: repo_path = get_repo_path(dest, bare) except (IOError, ValueError) as err: # No repo path found """``.git`` file does not have a valid format for detached Git dir.""" module.fail_json( msg='Current repo does not have a valid reference to a ' 'separate Git dir or it refers to the invalid path', details=to_text(err), ) # Read .git/HEAD for the name of the branch. # If we're in a detached HEAD state, look up the branch associated with # the remote HEAD in .git/refs/remotes/<remote>/HEAD headfile = os.path.join(repo_path, "HEAD") if is_not_a_branch(git_path, module, dest): headfile = os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD') branch = head_splitter(headfile, remote, module=module, fail_on_error=True) return branch def get_remote_url(git_path, module, dest, remote): '''Return URL of remote source for repo.''' command = [git_path, 'ls-remote', '--get-url', remote] (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: # There was an issue getting remote URL, most likely # command is not available in this version of Git. return None return to_native(out).rstrip('\n') def set_remote_url(git_path, module, repo, dest, remote): ''' updates repo from remote sources ''' # Return if remote URL isn't changing. remote_url = get_remote_url(git_path, module, dest, remote) if remote_url == repo or unfrackgitpath(remote_url) == unfrackgitpath(repo): return False command = [git_path, 'remote', 'set-url', remote, repo] (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: label = "set a new url %s for %s" % (repo, remote) module.fail_json(msg="Failed to %s: %s %s" % (label, out, err)) # Return False if remote_url is None to maintain previous behavior # for Git versions prior to 1.7.5 that lack required functionality. return remote_url is not None def fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=False): ''' updates repo from remote sources ''' set_remote_url(git_path, module, repo, dest, remote) commands = [] fetch_str = 'download remote objects and refs' fetch_cmd = [git_path, 'fetch'] refspecs = [] if depth: # try to find the minimal set of refs we need to fetch to get a # successful checkout currenthead = get_head_branch(git_path, module, dest, remote) if refspec: refspecs.append(refspec) elif version == 'HEAD': refspecs.append(currenthead) elif is_remote_branch(git_path, module, dest, repo, version): if currenthead != version: # this workaround is only needed for older git versions # 1.8.3 is broken, 1.9.x works # ensure that remote branch is available as both local and remote ref refspecs.append('+refs/heads/%s:refs/heads/%s' % (version, version)) refspecs.append('+refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version)) elif is_remote_tag(git_path, module, dest, repo, version): refspecs.append('+refs/tags/' + version + ':refs/tags/' + version) if refspecs: # if refspecs is empty, i.e. 
version is neither heads nor tags # assume it is a version hash # fall back to a full clone, otherwise we might not be able to checkout # version fetch_cmd.extend(['--depth', str(depth)]) if not depth or not refspecs: # don't try to be minimalistic but do a full clone # also do this if depth is given, but version is something that can't be fetched directly if bare: refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*'] else: # ensure all tags are fetched if git_version_used >= LooseVersion('1.9'): fetch_cmd.append('--tags') else: # old git versions have a bug in --tags that prevents updating existing tags commands.append((fetch_str, fetch_cmd + [remote])) refspecs = ['+refs/tags/*:refs/tags/*'] if refspec: refspecs.append(refspec) if force: fetch_cmd.append('--force') fetch_cmd.extend([remote]) commands.append((fetch_str, fetch_cmd + refspecs)) for (label, command) in commands: (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: module.fail_json(msg="Failed to %s: %s %s" % (label, out, err), cmd=command) def submodules_fetch(git_path, module, remote, track_submodules, dest): changed = False if not os.path.exists(os.path.join(dest, '.gitmodules')): # no submodules return changed gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r') for line in gitmodules_file: # Check for new submodules if not changed and line.strip().startswith('path'): path = line.split('=', 1)[1].strip() # Check that dest/path/.git exists if not os.path.exists(os.path.join(dest, path, '.git')): changed = True # Check for updates to existing modules if not changed: # Fetch updates begin = get_submodule_versions(git_path, module, dest) cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if rc != 0: module.fail_json(msg="Failed to fetch submodules: %s" % out + err) if track_submodules: # Compare against submodule HEAD # FIXME: determine this from .gitmodules version = 'master' after = get_submodule_versions(git_path, module, dest, '%s/%s' % (remote, version)) if begin != after: changed = True else: # Compare against the superproject's expectation cmd = [git_path, 'submodule', 'status'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if rc != 0: module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err) for line in out.splitlines(): if line[0] != ' ': changed = True break return changed def submodule_update(git_path, module, dest, track_submodules, force=False): ''' init and update any submodules ''' # get the valid submodule params params = get_submodule_update_params(module, git_path, dest) # skip submodule commands if .gitmodules is not present if not os.path.exists(os.path.join(dest, '.gitmodules')): return (0, '', '') cmd = [git_path, 'submodule', 'sync'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if 'remote' in params and track_submodules: cmd = [git_path, 'submodule', 'update', '--init', '--recursive', '--remote'] else: cmd = [git_path, 'submodule', 'update', '--init', '--recursive'] if force: cmd.append('--force') (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to init/update submodules: %s" % out + err) return (rc, out, err) def set_remote_branch(git_path, module, dest, remote, version, depth): """set refs for the remote branch version This assumes the branch does not yet exist locally and is therefore also not checked out. 
Can't use git remote set-branches, as it is not available in git 1.7.1 (centos6) """ branchref = "+refs/heads/%s:refs/heads/%s" % (version, version) branchref += ' +refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version) cmd = "%s fetch --depth=%s %s %s" % (git_path, depth, remote, branchref) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to fetch branch from remote: %s" % version, stdout=out, stderr=err, rc=rc) def switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist): cmd = '' if version == 'HEAD': branch = get_head_branch(git_path, module, dest, remote) (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % branch, stdout=out, stderr=err, rc=rc) cmd = "%s reset --hard %s/%s --" % (git_path, remote, branch) else: # FIXME check for local_branch first, should have been fetched already if is_remote_branch(git_path, module, dest, remote, version): if depth and not is_local_branch(git_path, module, dest, version): # git clone --depth implies --single-branch, which makes # the checkout fail if the version changes # fetch the remote branch, to be able to check it out next set_remote_branch(git_path, module, dest, remote, version, depth) if not is_local_branch(git_path, module, dest, version): cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version) else: (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % version, stdout=out, stderr=err, rc=rc) cmd = "%s reset --hard %s/%s" % (git_path, remote, version) else: cmd = "%s checkout --force %s" % (git_path, version) (rc, out1, err1) = module.run_command(cmd, cwd=dest) if rc != 0: if version != 'HEAD': module.fail_json(msg="Failed to checkout %s" % (version), stdout=out1, stderr=err1, rc=rc, cmd=cmd) else: module.fail_json(msg="Failed to checkout branch %s" % (branch), stdout=out1, stderr=err1, rc=rc, cmd=cmd) if verify_commit: verify_commit_sign(git_path, module, dest, version, gpg_whitelist) return (rc, out1, err1) def verify_commit_sign(git_path, module, dest, version, gpg_whitelist): if version in get_annotated_tags(git_path, module, dest): git_sub = "verify-tag" else: git_sub = "verify-commit" cmd = "%s %s %s" % (git_path, git_sub, version) if gpg_whitelist: cmd += " --raw" (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version, stdout=out, stderr=err, rc=rc) if gpg_whitelist: fingerprint = get_gpg_fingerprint(err) if fingerprint not in gpg_whitelist: module.fail_json(msg='The gpg_whitelist does not include the public key "%s" for this commit' % fingerprint, stdout=out, stderr=err, rc=rc) return (rc, out, err) def get_gpg_fingerprint(output): """Return a fingerprint of the primary key. 
Ref: https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;hb=HEAD#l482 """ for line in output.splitlines(): data = line.split() if data[1] != 'VALIDSIG': continue # if signed with a subkey, this contains the primary key fingerprint data_id = 11 if len(data) == 11 else 2 return data[data_id] def git_version(git_path, module): """return the installed version of git""" cmd = "%s --version" % git_path (rc, out, err) = module.run_command(cmd) if rc != 0: # one could fail_json here, but the version info is not that important, # so let's try to fail only on actual git commands return None rematch = re.search('git version (.*)$', to_native(out)) if not rematch: return None return LooseVersion(rematch.groups()[0]) def git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version): """ Create git archive in given source directory """ cmd = [git_path, 'archive', '--format', archive_fmt, '--output', archive, version] if archive_prefix is not None: cmd.insert(-1, '--prefix') cmd.insert(-1, archive_prefix) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to perform archive operation", details="Git archive command failed to create " "archive %s using %s directory." "Error: %s" % (archive, dest, err)) return rc, out, err def create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result): """ Helper function for creating archive using git_archive """ all_archive_fmt = {'.zip': 'zip', '.gz': 'tar.gz', '.tar': 'tar', '.tgz': 'tgz'} _, archive_ext = os.path.splitext(archive) archive_fmt = all_archive_fmt.get(archive_ext, None) if archive_fmt is None: module.fail_json(msg="Unable to get file extension from " "archive file name : %s" % archive, details="Please specify archive as filename with " "extension. File extension can be one " "of ['tar', 'tar.gz', 'zip', 'tgz']") repo_name = repo.split("/")[-1].replace(".git", "") if os.path.exists(archive): # If git archive file exists, then compare it with new git archive file. # if match, do nothing # if does not match, then replace existing with temp archive file. tempdir = tempfile.mkdtemp() new_archive_dest = os.path.join(tempdir, repo_name) new_archive = new_archive_dest + '.' 
+ archive_fmt git_archive(git_path, module, dest, new_archive, archive_fmt, archive_prefix, version) # filecmp is supposed to be efficient than md5sum checksum if filecmp.cmp(new_archive, archive): result.update(changed=False) # Cleanup before exiting try: shutil.rmtree(tempdir) except OSError: pass else: try: shutil.move(new_archive, archive) shutil.rmtree(tempdir) result.update(changed=True) except OSError as e: module.fail_json(msg="Failed to move %s to %s" % (new_archive, archive), details=u"Error occurred while moving : %s" % to_text(e)) else: # Perform archive from local directory git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version) result.update(changed=True) # =========================================== def main(): module = AnsibleModule( argument_spec=dict( dest=dict(type='path'), repo=dict(required=True, aliases=['name']), version=dict(default='HEAD'), remote=dict(default='origin'), refspec=dict(default=None), reference=dict(default=None), force=dict(default='no', type='bool'), depth=dict(default=None, type='int'), clone=dict(default='yes', type='bool'), update=dict(default='yes', type='bool'), verify_commit=dict(default='no', type='bool'), gpg_whitelist=dict(default=[], type='list', elements='str'), accept_hostkey=dict(default='no', type='bool'), accept_newhostkey=dict(default='no', type='bool'), key_file=dict(default=None, type='path', required=False), ssh_opts=dict(default=None, required=False), executable=dict(default=None, type='path'), bare=dict(default='no', type='bool'), recursive=dict(default='yes', type='bool'), single_branch=dict(default=False, type='bool'), track_submodules=dict(default='no', type='bool'), umask=dict(default=None, type='raw'), archive=dict(type='path'), archive_prefix=dict(), separate_git_dir=dict(type='path'), ), mutually_exclusive=[('separate_git_dir', 'bare'), ('accept_hostkey', 'accept_newhostkey')], required_by={'archive_prefix': ['archive']}, supports_check_mode=True ) dest = module.params['dest'] repo = module.params['repo'] version = module.params['version'] remote = module.params['remote'] refspec = module.params['refspec'] force = module.params['force'] depth = module.params['depth'] update = module.params['update'] allow_clone = module.params['clone'] bare = module.params['bare'] verify_commit = module.params['verify_commit'] gpg_whitelist = module.params['gpg_whitelist'] reference = module.params['reference'] single_branch = module.params['single_branch'] git_path = module.params['executable'] or module.get_bin_path('git', True) key_file = module.params['key_file'] ssh_opts = module.params['ssh_opts'] umask = module.params['umask'] archive = module.params['archive'] archive_prefix = module.params['archive_prefix'] separate_git_dir = module.params['separate_git_dir'] result = dict(changed=False, warnings=list()) if module.params['accept_hostkey']: if ssh_opts is not None: if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts): ssh_opts += " -o StrictHostKeyChecking=no" else: ssh_opts = "-o StrictHostKeyChecking=no" if module.params['accept_newhostkey']: if not ssh_supports_acceptnewhostkey(module): module.warn("Your ssh client does not support accept_newhostkey option, therefore it cannot be used.") else: if ssh_opts is not None: if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts): ssh_opts += " -o StrictHostKeyChecking=accept-new" else: ssh_opts = "-o StrictHostKeyChecking=accept-new" # evaluate 
and set the umask before doing anything else if umask is not None: if not isinstance(umask, string_types): module.fail_json(msg="umask must be defined as a quoted octal integer") try: umask = int(umask, 8) except Exception: module.fail_json(msg="umask must be an octal integer", details=to_text(sys.exc_info()[1])) os.umask(umask) # Certain features such as depth require a file:/// protocol for path based urls # so force a protocol here ... if os.path.expanduser(repo).startswith('/'): repo = 'file://' + os.path.expanduser(repo) # We screenscrape a huge amount of git commands so use C locale anytime we # call run_command() locale = get_best_parsable_locale(module) module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LC_CTYPE=locale) if separate_git_dir: separate_git_dir = os.path.realpath(separate_git_dir) gitconfig = None if not dest and allow_clone: module.fail_json(msg="the destination directory must be specified unless clone=no") elif dest: dest = os.path.abspath(dest) try: repo_path = get_repo_path(dest, bare) if separate_git_dir and os.path.exists(repo_path) and separate_git_dir != repo_path: result['changed'] = True if not module.check_mode: relocate_repo(module, result, separate_git_dir, repo_path, dest) repo_path = separate_git_dir except (IOError, ValueError) as err: # No repo path found """``.git`` file does not have a valid format for detached Git dir.""" module.fail_json( msg='Current repo does not have a valid reference to a ' 'separate Git dir or it refers to the invalid path', details=to_text(err), ) gitconfig = os.path.join(repo_path, 'config') # create a wrapper script and export # GIT_SSH=<path> as an environment variable # for git to use the wrapper script ssh_wrapper = write_ssh_wrapper(module.tmpdir) set_git_ssh(ssh_wrapper, key_file, ssh_opts) module.add_cleanup_file(path=ssh_wrapper) git_version_used = git_version(git_path, module) if depth is not None and git_version_used < LooseVersion('1.9.1'): module.warn("git version is too old to fully support the depth argument. Falling back to full checkouts.") depth = None recursive = module.params['recursive'] track_submodules = module.params['track_submodules'] result.update(before=None) local_mods = False if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone): # if there is no git configuration, do a clone operation unless: # * the user requested no clone (they just want info) # * we're doing a check mode test # In those cases we do an ls-remote if module.check_mode or not allow_clone: remote_head = get_remote_head(git_path, module, dest, version, repo, bare) result.update(changed=True, after=remote_head) if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff module.exit_json(**result) # there's no git config, so clone clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch) elif not update: # Just return having found a repo already in the dest path # this does no checking that the repo is the actual repo # requested. 
result['before'] = get_version(module, git_path, dest) result.update(after=result['before']) if archive: # Git archive is not supported by all git servers, so # we will first clone and perform git archive from local directory if module.check_mode: result.update(changed=True) module.exit_json(**result) create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result) module.exit_json(**result) else: # else do a pull local_mods = has_local_mods(module, git_path, dest, bare) result['before'] = get_version(module, git_path, dest) if local_mods: # failure should happen regardless of check mode if not force: module.fail_json(msg="Local modifications exist in the destination: " + dest + " (force=no).", **result) # if force and in non-check mode, do a reset if not module.check_mode: reset(git_path, module, dest) result.update(changed=True, msg='Local modifications exist in the destination: ' + dest) # exit if already at desired sha version if module.check_mode: remote_url = get_remote_url(git_path, module, dest, remote) remote_url_changed = remote_url and remote_url != repo and unfrackgitpath(remote_url) != unfrackgitpath(repo) else: remote_url_changed = set_remote_url(git_path, module, repo, dest, remote) result.update(remote_url_changed=remote_url_changed) if module.check_mode: remote_head = get_remote_head(git_path, module, dest, version, remote, bare) result.update(changed=(result['before'] != remote_head or remote_url_changed), after=remote_head) # FIXME: This diff should fail since the new remote_head is not fetched yet?! if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff module.exit_json(**result) else: fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=force) result['after'] = get_version(module, git_path, dest) # switch to version specified regardless of whether # we got new revisions from the repository if not bare: switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist) # Deal with submodules submodules_updated = False if recursive and not bare: submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest) if submodules_updated: result.update(submodules_changed=submodules_updated) if module.check_mode: result.update(changed=True, after=remote_head) module.exit_json(**result) # Switch to version specified submodule_update(git_path, module, dest, track_submodules, force=force) # determine if we changed anything result['after'] = get_version(module, git_path, dest) if result['before'] != result['after'] or local_mods or submodules_updated or remote_url_changed: result.update(changed=True) if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff if archive: # Git archive is not supported by all git servers, so # we will first clone and perform git archive from local directory if module.check_mode: result.update(changed=True) module.exit_json(**result) create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result) module.exit_json(**result) if __name__ == '__main__': main()
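One detail of the module source above worth calling out: get_repo_path() must handle layouts where `.git` is a plain file pointing at the real git directory (submodules and separate_git_dir checkouts). The following is a simplified, standalone sketch of that parsing, written for illustration rather than copied from the module:

```python
import os

def resolve_git_dir(dest):
    """Simplified sketch of the module's get_repo_path() behavior: when
    .git is a file (submodule or separate-git-dir layout), it contains a
    'gitdir: <path>' line pointing at the real repository directory."""
    repo_path = os.path.join(dest, '.git')
    if os.path.isfile(repo_path):
        with open(repo_path) as gitfile:
            data = gitfile.read().rstrip()
        prefix, sep, gitdir = data.partition('gitdir: ')
        if prefix or not sep:
            raise ValueError('.git file has invalid git dir reference format')
        # the gitdir reference may be absolute or relative to dest
        repo_path = gitdir if os.path.isabs(gitdir) else os.path.join(dest, gitdir)
    if not os.path.isdir(repo_path):
        raise ValueError('%s is not a directory' % repo_path)
    return repo_path
```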
closed
ansible/ansible
https://github.com/ansible/ansible
64,673
The git module should provide a warning if the ssh_opts parameter is supplied but the version of git is 2.2.3 or earlier
##### SUMMARY The `ssh_opts` parameter relies on setting environment variables for git, such as GIT_SSH. Git did not recognize this environment variable until version 2.3.0 (see https://git-scm.com/docs/git/2.3.0), and many contemporary OSes (RHEL 7, CentOS 7) include older versions of Git (1.8.3.1). It would be useful to provide a warning in this case, similar to the warning for the `depth` parameter. E.g. "Your git version is too old to support the ssh_opts argument." ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME git.py source control, git module ##### ADDITIONAL INFORMATION It would save users time when trying to troubleshoot failed git connections via SSH. ```yaml - name: clone a git repo git: repo: [email protected]:cherdt/hello_ansible.git dest: /opt/hello_ansible version: master ssh_opts: "-o IdentityFile=github_id_rsa" ```
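A minimal sketch of the requested check, modeled on the module's existing depth-version warning in main(); the helper name warn_if_ssh_opts_unsupported and the 2.3.0 threshold (taken from the issue's reference to GIT_SSH support) are illustrative assumptions, not the actual change from the linked PR:

```python
from ansible.module_utils.compat.version import LooseVersion

def warn_if_ssh_opts_unsupported(module, git_version_used, ssh_opts):
    # Hypothetical helper mirroring the depth check in main(): warn and
    # carry on when the installed git predates the feature being used.
    # git_version_used is the LooseVersion returned by git_version(),
    # or None when the version could not be determined.
    if ssh_opts and git_version_used is not None and git_version_used < LooseVersion('2.3.0'):
        module.warn("Your git version is too old to support the ssh_opts argument.")
```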
https://github.com/ansible/ansible/issues/64673
https://github.com/ansible/ansible/pull/73404
be19863e44cc6b78706147b25489a73d7c8fbcb5
b493c590bcee9b64e8ae02c17d4fde2331e0598b
2019-11-11T16:21:11Z
python
2022-02-07T21:05:16Z
changelogs/fragments/git_fixes.yml
lib/ansible/modules/git.py
return as-is since it appears # cannot check for a specific sha1 on remote return version (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd) if len(out) < 1: module.fail_json(msg="Could not determine remote revision for %s" % version, stdout=out, stderr=err, rc=rc) out = to_native(out) if tag: # Find the dereferenced tag if this is an annotated tag. for tag in out.split('\n'): if tag.endswith(version + '^{}'): out = tag break elif tag.endswith(version): out = tag rev = out.split()[0] return rev def is_remote_tag(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version) (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if to_native(version, errors='surrogate_or_strict') in out: return True else: return False def get_branches(git_path, module, dest): branches = [] cmd = '%s branch --no-color -a' % (git_path,) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine branch data - received %s" % out, stdout=out, stderr=err) for line in out.split('\n'): if line.strip(): branches.append(line.strip()) return branches def get_annotated_tags(git_path, module, dest): tags = [] cmd = [git_path, 'for-each-ref', 'refs/tags/', '--format', '%(objecttype):%(refname:short)'] (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine tag data - received %s" % out, stdout=out, stderr=err) for line in to_native(out).split('\n'): if line.strip(): tagtype, tagname = line.strip().split(':') if tagtype == 'tag': tags.append(tagname) return tags def is_remote_branch(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version) (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if to_native(version, errors='surrogate_or_strict') in out: return True else: return False def is_local_branch(git_path, module, dest, branch): branches = get_branches(git_path, module, dest) lbranch = '%s' % branch if lbranch in branches: return True elif '* %s' % branch in branches: return True else: return False def is_not_a_branch(git_path, module, dest): branches = get_branches(git_path, module, dest) for branch in branches: if branch.startswith('* ') and ('no branch' in branch or 'detached from' in branch or 'detached at' in branch): return True return False def get_repo_path(dest, bare): if bare: repo_path = dest else: repo_path = os.path.join(dest, '.git') # Check if the .git is a file. If it is a file, it means that the repository is in external directory respective to the working copy (e.g. we are in a # submodule structure). if os.path.isfile(repo_path): with open(repo_path, 'r') as gitfile: data = gitfile.read() ref_prefix, gitdir = data.rstrip().split('gitdir: ', 1) if ref_prefix: raise ValueError('.git file has invalid git dir reference format') # There is a possibility the .git file to have an absolute path. if os.path.isabs(gitdir): repo_path = gitdir else: # Use original destination directory with data from .git file. repo_path = os.path.join(dest, gitdir) if not os.path.isdir(repo_path): raise ValueError('%s is not a directory' % repo_path) return repo_path def get_head_branch(git_path, module, dest, remote, bare=False): ''' Determine what branch HEAD is associated with. This is partly taken from lib/ansible/utils/__init__.py. It finds the correct path to .git/HEAD and reads from that file the branch that HEAD is associated with. 
In the case of a detached HEAD, this will look up the branch in .git/refs/remotes/<remote>/HEAD. ''' try: repo_path = get_repo_path(dest, bare) except (IOError, ValueError) as err: # No repo path found """``.git`` file does not have a valid format for detached Git dir.""" module.fail_json( msg='Current repo does not have a valid reference to a ' 'separate Git dir or it refers to the invalid path', details=to_text(err), ) # Read .git/HEAD for the name of the branch. # If we're in a detached HEAD state, look up the branch associated with # the remote HEAD in .git/refs/remotes/<remote>/HEAD headfile = os.path.join(repo_path, "HEAD") if is_not_a_branch(git_path, module, dest): headfile = os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD') branch = head_splitter(headfile, remote, module=module, fail_on_error=True) return branch def get_remote_url(git_path, module, dest, remote): '''Return URL of remote source for repo.''' command = [git_path, 'ls-remote', '--get-url', remote] (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: # There was an issue getting remote URL, most likely # command is not available in this version of Git. return None return to_native(out).rstrip('\n') def set_remote_url(git_path, module, repo, dest, remote): ''' updates repo from remote sources ''' # Return if remote URL isn't changing. remote_url = get_remote_url(git_path, module, dest, remote) if remote_url == repo or unfrackgitpath(remote_url) == unfrackgitpath(repo): return False command = [git_path, 'remote', 'set-url', remote, repo] (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: label = "set a new url %s for %s" % (repo, remote) module.fail_json(msg="Failed to %s: %s %s" % (label, out, err)) # Return False if remote_url is None to maintain previous behavior # for Git versions prior to 1.7.5 that lack required functionality. return remote_url is not None def fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=False): ''' updates repo from remote sources ''' set_remote_url(git_path, module, repo, dest, remote) commands = [] fetch_str = 'download remote objects and refs' fetch_cmd = [git_path, 'fetch'] refspecs = [] if depth: # try to find the minimal set of refs we need to fetch to get a # successful checkout currenthead = get_head_branch(git_path, module, dest, remote) if refspec: refspecs.append(refspec) elif version == 'HEAD': refspecs.append(currenthead) elif is_remote_branch(git_path, module, dest, repo, version): if currenthead != version: # this workaround is only needed for older git versions # 1.8.3 is broken, 1.9.x works # ensure that remote branch is available as both local and remote ref refspecs.append('+refs/heads/%s:refs/heads/%s' % (version, version)) refspecs.append('+refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version)) elif is_remote_tag(git_path, module, dest, repo, version): refspecs.append('+refs/tags/' + version + ':refs/tags/' + version) if refspecs: # if refspecs is empty, i.e. 
version is neither heads nor tags # assume it is a version hash # fall back to a full clone, otherwise we might not be able to checkout # version fetch_cmd.extend(['--depth', str(depth)]) if not depth or not refspecs: # don't try to be minimalistic but do a full clone # also do this if depth is given, but version is something that can't be fetched directly if bare: refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*'] else: # ensure all tags are fetched if git_version_used >= LooseVersion('1.9'): fetch_cmd.append('--tags') else: # old git versions have a bug in --tags that prevents updating existing tags commands.append((fetch_str, fetch_cmd + [remote])) refspecs = ['+refs/tags/*:refs/tags/*'] if refspec: refspecs.append(refspec) if force: fetch_cmd.append('--force') fetch_cmd.extend([remote]) commands.append((fetch_str, fetch_cmd + refspecs)) for (label, command) in commands: (rc, out, err) = module.run_command(command, cwd=dest) if rc != 0: module.fail_json(msg="Failed to %s: %s %s" % (label, out, err), cmd=command) def submodules_fetch(git_path, module, remote, track_submodules, dest): changed = False if not os.path.exists(os.path.join(dest, '.gitmodules')): # no submodules return changed gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r') for line in gitmodules_file: # Check for new submodules if not changed and line.strip().startswith('path'): path = line.split('=', 1)[1].strip() # Check that dest/path/.git exists if not os.path.exists(os.path.join(dest, path, '.git')): changed = True # Check for updates to existing modules if not changed: # Fetch updates begin = get_submodule_versions(git_path, module, dest) cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if rc != 0: module.fail_json(msg="Failed to fetch submodules: %s" % out + err) if track_submodules: # Compare against submodule HEAD # FIXME: determine this from .gitmodules version = 'master' after = get_submodule_versions(git_path, module, dest, '%s/%s' % (remote, version)) if begin != after: changed = True else: # Compare against the superproject's expectation cmd = [git_path, 'submodule', 'status'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if rc != 0: module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err) for line in out.splitlines(): if line[0] != ' ': changed = True break return changed def submodule_update(git_path, module, dest, track_submodules, force=False): ''' init and update any submodules ''' # get the valid submodule params params = get_submodule_update_params(module, git_path, dest) # skip submodule commands if .gitmodules is not present if not os.path.exists(os.path.join(dest, '.gitmodules')): return (0, '', '') cmd = [git_path, 'submodule', 'sync'] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if 'remote' in params and track_submodules: cmd = [git_path, 'submodule', 'update', '--init', '--recursive', '--remote'] else: cmd = [git_path, 'submodule', 'update', '--init', '--recursive'] if force: cmd.append('--force') (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to init/update submodules: %s" % out + err) return (rc, out, err) def set_remote_branch(git_path, module, dest, remote, version, depth): """set refs for the remote branch version This assumes the branch does not yet exist locally and is therefore also not checked out. 
Can't use git remote set-branches, as it is not available in git 1.7.1 (centos6) """ branchref = "+refs/heads/%s:refs/heads/%s" % (version, version) branchref += ' +refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version) cmd = "%s fetch --depth=%s %s %s" % (git_path, depth, remote, branchref) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to fetch branch from remote: %s" % version, stdout=out, stderr=err, rc=rc) def switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist): cmd = '' if version == 'HEAD': branch = get_head_branch(git_path, module, dest, remote) (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % branch, stdout=out, stderr=err, rc=rc) cmd = "%s reset --hard %s/%s --" % (git_path, remote, branch) else: # FIXME check for local_branch first, should have been fetched already if is_remote_branch(git_path, module, dest, remote, version): if depth and not is_local_branch(git_path, module, dest, version): # git clone --depth implies --single-branch, which makes # the checkout fail if the version changes # fetch the remote branch, to be able to check it out next set_remote_branch(git_path, module, dest, remote, version, depth) if not is_local_branch(git_path, module, dest, version): cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version) else: (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % version, stdout=out, stderr=err, rc=rc) cmd = "%s reset --hard %s/%s" % (git_path, remote, version) else: cmd = "%s checkout --force %s" % (git_path, version) (rc, out1, err1) = module.run_command(cmd, cwd=dest) if rc != 0: if version != 'HEAD': module.fail_json(msg="Failed to checkout %s" % (version), stdout=out1, stderr=err1, rc=rc, cmd=cmd) else: module.fail_json(msg="Failed to checkout branch %s" % (branch), stdout=out1, stderr=err1, rc=rc, cmd=cmd) if verify_commit: verify_commit_sign(git_path, module, dest, version, gpg_whitelist) return (rc, out1, err1) def verify_commit_sign(git_path, module, dest, version, gpg_whitelist): if version in get_annotated_tags(git_path, module, dest): git_sub = "verify-tag" else: git_sub = "verify-commit" cmd = "%s %s %s" % (git_path, git_sub, version) if gpg_whitelist: cmd += " --raw" (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version, stdout=out, stderr=err, rc=rc) if gpg_whitelist: fingerprint = get_gpg_fingerprint(err) if fingerprint not in gpg_whitelist: module.fail_json(msg='The gpg_whitelist does not include the public key "%s" for this commit' % fingerprint, stdout=out, stderr=err, rc=rc) return (rc, out, err) def get_gpg_fingerprint(output): """Return a fingerprint of the primary key. 
Ref: https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;hb=HEAD#l482 """ for line in output.splitlines(): data = line.split() if data[1] != 'VALIDSIG': continue # if signed with a subkey, this contains the primary key fingerprint data_id = 11 if len(data) == 11 else 2 return data[data_id] def git_version(git_path, module): """return the installed version of git""" cmd = "%s --version" % git_path (rc, out, err) = module.run_command(cmd) if rc != 0: # one could fail_json here, but the version info is not that important, # so let's try to fail only on actual git commands return None rematch = re.search('git version (.*)$', to_native(out)) if not rematch: return None return LooseVersion(rematch.groups()[0]) def git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version): """ Create git archive in given source directory """ cmd = [git_path, 'archive', '--format', archive_fmt, '--output', archive, version] if archive_prefix is not None: cmd.insert(-1, '--prefix') cmd.insert(-1, archive_prefix) (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to perform archive operation", details="Git archive command failed to create " "archive %s using %s directory." "Error: %s" % (archive, dest, err)) return rc, out, err def create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result): """ Helper function for creating archive using git_archive """ all_archive_fmt = {'.zip': 'zip', '.gz': 'tar.gz', '.tar': 'tar', '.tgz': 'tgz'} _, archive_ext = os.path.splitext(archive) archive_fmt = all_archive_fmt.get(archive_ext, None) if archive_fmt is None: module.fail_json(msg="Unable to get file extension from " "archive file name : %s" % archive, details="Please specify archive as filename with " "extension. File extension can be one " "of ['tar', 'tar.gz', 'zip', 'tgz']") repo_name = repo.split("/")[-1].replace(".git", "") if os.path.exists(archive): # If git archive file exists, then compare it with new git archive file. # if match, do nothing # if does not match, then replace existing with temp archive file. tempdir = tempfile.mkdtemp() new_archive_dest = os.path.join(tempdir, repo_name) new_archive = new_archive_dest + '.' 
+ archive_fmt git_archive(git_path, module, dest, new_archive, archive_fmt, archive_prefix, version) # filecmp is supposed to be efficient than md5sum checksum if filecmp.cmp(new_archive, archive): result.update(changed=False) # Cleanup before exiting try: shutil.rmtree(tempdir) except OSError: pass else: try: shutil.move(new_archive, archive) shutil.rmtree(tempdir) result.update(changed=True) except OSError as e: module.fail_json(msg="Failed to move %s to %s" % (new_archive, archive), details=u"Error occurred while moving : %s" % to_text(e)) else: # Perform archive from local directory git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version) result.update(changed=True) # =========================================== def main(): module = AnsibleModule( argument_spec=dict( dest=dict(type='path'), repo=dict(required=True, aliases=['name']), version=dict(default='HEAD'), remote=dict(default='origin'), refspec=dict(default=None), reference=dict(default=None), force=dict(default='no', type='bool'), depth=dict(default=None, type='int'), clone=dict(default='yes', type='bool'), update=dict(default='yes', type='bool'), verify_commit=dict(default='no', type='bool'), gpg_whitelist=dict(default=[], type='list', elements='str'), accept_hostkey=dict(default='no', type='bool'), accept_newhostkey=dict(default='no', type='bool'), key_file=dict(default=None, type='path', required=False), ssh_opts=dict(default=None, required=False), executable=dict(default=None, type='path'), bare=dict(default='no', type='bool'), recursive=dict(default='yes', type='bool'), single_branch=dict(default=False, type='bool'), track_submodules=dict(default='no', type='bool'), umask=dict(default=None, type='raw'), archive=dict(type='path'), archive_prefix=dict(), separate_git_dir=dict(type='path'), ), mutually_exclusive=[('separate_git_dir', 'bare'), ('accept_hostkey', 'accept_newhostkey')], required_by={'archive_prefix': ['archive']}, supports_check_mode=True ) dest = module.params['dest'] repo = module.params['repo'] version = module.params['version'] remote = module.params['remote'] refspec = module.params['refspec'] force = module.params['force'] depth = module.params['depth'] update = module.params['update'] allow_clone = module.params['clone'] bare = module.params['bare'] verify_commit = module.params['verify_commit'] gpg_whitelist = module.params['gpg_whitelist'] reference = module.params['reference'] single_branch = module.params['single_branch'] git_path = module.params['executable'] or module.get_bin_path('git', True) key_file = module.params['key_file'] ssh_opts = module.params['ssh_opts'] umask = module.params['umask'] archive = module.params['archive'] archive_prefix = module.params['archive_prefix'] separate_git_dir = module.params['separate_git_dir'] result = dict(changed=False, warnings=list()) if module.params['accept_hostkey']: if ssh_opts is not None: if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts): ssh_opts += " -o StrictHostKeyChecking=no" else: ssh_opts = "-o StrictHostKeyChecking=no" if module.params['accept_newhostkey']: if not ssh_supports_acceptnewhostkey(module): module.warn("Your ssh client does not support accept_newhostkey option, therefore it cannot be used.") else: if ssh_opts is not None: if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts): ssh_opts += " -o StrictHostKeyChecking=accept-new" else: ssh_opts = "-o StrictHostKeyChecking=accept-new" # evaluate 
and set the umask before doing anything else if umask is not None: if not isinstance(umask, string_types): module.fail_json(msg="umask must be defined as a quoted octal integer") try: umask = int(umask, 8) except Exception: module.fail_json(msg="umask must be an octal integer", details=to_text(sys.exc_info()[1])) os.umask(umask) # Certain features such as depth require a file:/// protocol for path based urls # so force a protocol here ... if os.path.expanduser(repo).startswith('/'): repo = 'file://' + os.path.expanduser(repo) # We screenscrape a huge amount of git commands so use C locale anytime we # call run_command() locale = get_best_parsable_locale(module) module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LC_CTYPE=locale) if separate_git_dir: separate_git_dir = os.path.realpath(separate_git_dir) gitconfig = None if not dest and allow_clone: module.fail_json(msg="the destination directory must be specified unless clone=no") elif dest: dest = os.path.abspath(dest) try: repo_path = get_repo_path(dest, bare) if separate_git_dir and os.path.exists(repo_path) and separate_git_dir != repo_path: result['changed'] = True if not module.check_mode: relocate_repo(module, result, separate_git_dir, repo_path, dest) repo_path = separate_git_dir except (IOError, ValueError) as err: # No repo path found """``.git`` file does not have a valid format for detached Git dir.""" module.fail_json( msg='Current repo does not have a valid reference to a ' 'separate Git dir or it refers to the invalid path', details=to_text(err), ) gitconfig = os.path.join(repo_path, 'config') # create a wrapper script and export # GIT_SSH=<path> as an environment variable # for git to use the wrapper script ssh_wrapper = write_ssh_wrapper(module.tmpdir) set_git_ssh(ssh_wrapper, key_file, ssh_opts) module.add_cleanup_file(path=ssh_wrapper) git_version_used = git_version(git_path, module) if depth is not None and git_version_used < LooseVersion('1.9.1'): module.warn("git version is too old to fully support the depth argument. Falling back to full checkouts.") depth = None recursive = module.params['recursive'] track_submodules = module.params['track_submodules'] result.update(before=None) local_mods = False if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone): # if there is no git configuration, do a clone operation unless: # * the user requested no clone (they just want info) # * we're doing a check mode test # In those cases we do an ls-remote if module.check_mode or not allow_clone: remote_head = get_remote_head(git_path, module, dest, version, repo, bare) result.update(changed=True, after=remote_head) if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff module.exit_json(**result) # there's no git config, so clone clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch) elif not update: # Just return having found a repo already in the dest path # this does no checking that the repo is the actual repo # requested. 
result['before'] = get_version(module, git_path, dest) result.update(after=result['before']) if archive: # Git archive is not supported by all git servers, so # we will first clone and perform git archive from local directory if module.check_mode: result.update(changed=True) module.exit_json(**result) create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result) module.exit_json(**result) else: # else do a pull local_mods = has_local_mods(module, git_path, dest, bare) result['before'] = get_version(module, git_path, dest) if local_mods: # failure should happen regardless of check mode if not force: module.fail_json(msg="Local modifications exist in the destination: " + dest + " (force=no).", **result) # if force and in non-check mode, do a reset if not module.check_mode: reset(git_path, module, dest) result.update(changed=True, msg='Local modifications exist in the destination: ' + dest) # exit if already at desired sha version if module.check_mode: remote_url = get_remote_url(git_path, module, dest, remote) remote_url_changed = remote_url and remote_url != repo and unfrackgitpath(remote_url) != unfrackgitpath(repo) else: remote_url_changed = set_remote_url(git_path, module, repo, dest, remote) result.update(remote_url_changed=remote_url_changed) if module.check_mode: remote_head = get_remote_head(git_path, module, dest, version, remote, bare) result.update(changed=(result['before'] != remote_head or remote_url_changed), after=remote_head) # FIXME: This diff should fail since the new remote_head is not fetched yet?! if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff module.exit_json(**result) else: fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=force) result['after'] = get_version(module, git_path, dest) # switch to version specified regardless of whether # we got new revisions from the repository if not bare: switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist) # Deal with submodules submodules_updated = False if recursive and not bare: submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest) if submodules_updated: result.update(submodules_changed=submodules_updated) if module.check_mode: result.update(changed=True, after=remote_head) module.exit_json(**result) # Switch to version specified submodule_update(git_path, module, dest, track_submodules, force=force) # determine if we changed anything result['after'] = get_version(module, git_path, dest) if result['before'] != result['after'] or local_mods or submodules_updated or remote_url_changed: result.update(changed=True) if module._diff: diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after']) if diff: result['diff'] = diff if archive: # Git archive is not supported by all git servers, so # we will first clone and perform git archive from local directory if module.check_mode: result.update(changed=True) module.exit_json(**result) create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result) module.exit_json(**result) if __name__ == '__main__': main()
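Several features in git.py above are gated on the detected git version; for example, `depth` is dropped entirely below git 1.9.1 and a warning is emitted instead. Below is a minimal standalone sketch of that gating, assuming only the `re`/`LooseVersion` machinery the module itself uses; the helper names `parse_git_version` and `effective_depth` are illustrative and do not exist in the module.

```python
# Minimal sketch of the version gating in git.py; parse_git_version and
# effective_depth are hypothetical helpers, not module functions.
import re

from ansible.module_utils.compat.version import LooseVersion


def parse_git_version(version_output):
    """Extract a comparable version from `git --version` output."""
    match = re.search(r'git version (.*)$', version_output)
    if not match:
        return None
    return LooseVersion(match.group(1))


def effective_depth(depth, git_version_used):
    """Drop depth when git is too old for reliable shallow operations."""
    if depth is not None and git_version_used is not None \
            and git_version_used < LooseVersion('1.9.1'):
        return None  # full-checkout fallback, mirroring the module's warning
    return depth


# git 1.8.3 falls back to a full checkout, git 2.30.0 keeps the depth
assert effective_depth(1, parse_git_version('git version 1.8.3')) is None
assert effective_depth(1, parse_git_version('git version 2.30.0')) == 1
```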
closed
ansible/ansible
https://github.com/ansible/ansible
69,014
ansible-pull gives spurious inventory warning
##### SUMMARY
When ansible-pull creates the ansible command to pull the repo, assuming no additional command line arguments have been supplied, it constructs both an --inventory argument and a --limit argument. The --inventory argument has only 'localhost,' while the --limit argument has 'localhost,<hostname>,127.0.0.1'. This is guaranteed to produce the warning:

[WARNING]: Could not match supplied host pattern, ignoring: <hostname>

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
ansible-pull

##### ANSIBLE VERSION
```
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/mwallace/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 9.2.1 20191008]
```

##### CONFIGURATION
```
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
Linux ccm-lc-fin-304 5.3.0-46-generic #38-Ubuntu SMP Fri Mar 27 17:37:05 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

##### STEPS TO REPRODUCE
sudo ansible-pull -U <git repo url>

##### EXPECTED RESULTS
I expect this to run with no warnings.

##### ACTUAL RESULTS
I get the warning:

[WARNING]: Could not match supplied host pattern, ignoring: <hostname>
https://github.com/ansible/ansible/issues/69014
https://github.com/ansible/ansible/pull/76965
6d2d476113b3a26e46c9917e213f09494fbc0a13
d4c9c103e2884dd88876909ffe8fda2fc776811a
2020-04-17T21:07:09Z
python
2022-02-07T21:10:30Z
changelogs/fragments/pull_fix_limit.yml
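The SUMMARY above pins the warning on a mismatch between two strings that ansible-pull builds. Here is a short sketch of that mismatch, mirroring the string construction in `PullCLI.run()` (shown in full in lib/ansible/cli/pull.py below); the final set arithmetic is illustrative only, and it assumes 127.0.0.1 resolves against the implicit localhost (consistent with the report, which only shows a warning for the hostname).

```python
# Rebuilds the two option strings from PullCLI.run(); only the
# 'unmatched' computation at the end is added for illustration.
import platform
import socket

node = platform.node()
host = socket.getfqdn()

# every name the local machine might go by ends up in the limit ...
limit_opts = 'localhost,%s,127.0.0.1' % ','.join(
    set([host, node, host.split('.')[0], node.split('.')[0]]))

# ... while, with no -i argument, the implicit inventory only holds localhost
inv_opts = ' -i localhost, '

# assumption: 127.0.0.1 matches the implicit localhost, so only the
# hostname-derived patterns are left without a matching inventory host
matchable = {'localhost', '127.0.0.1'}
unmatched = sorted(set(limit_opts.split(',')) - matchable)
print('Could not match supplied host pattern, ignoring:', unmatched)
```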
lib/ansible/cli/pull.py
#!/usr/bin/env python # Copyright: (c) 2012, Michael DeHaan <[email protected]> # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # PYTHON_ARGCOMPLETE_OK from __future__ import (absolute_import, division, print_function) __metaclass__ = type # ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first from ansible.cli import CLI import datetime import os import platform import random import shlex import shutil import socket import sys import time from ansible import constants as C from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleOptionsError from ansible.module_utils._text import to_native, to_text from ansible.plugins.loader import module_loader from ansible.utils.cmd_functions import run_cmd from ansible.utils.display import Display display = Display() class PullCLI(CLI): ''' Used to pull a remote copy of ansible on each managed node, each set to run via cron and update playbook source via a source repository. This inverts the default *push* architecture of ansible into a *pull* architecture, which has near-limitless scaling potential. The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull. This is useful both for extreme scale-out as well as periodic remediation. Usage of the 'fetch' module to retrieve logs from ansible-pull runs would be an excellent way to gather and analyze remote logs from ansible-pull. ''' name = 'ansible-pull' DEFAULT_REPO_TYPE = 'git' DEFAULT_PLAYBOOK = 'local.yml' REPO_CHOICES = ('git', 'subversion', 'hg', 'bzr') PLAYBOOK_ERRORS = { 1: 'File does not exist', 2: 'File is not readable', } ARGUMENTS = {'playbook.yml': 'The name of one the YAML format files to run as an Ansible playbook.' 'This can be a relative path within the checkout. 
By default, Ansible will' "look for a playbook based on the host's fully-qualified domain name," 'on the host hostname and finally a playbook named *local.yml*.', } SKIP_INVENTORY_DEFAULTS = True @staticmethod def _get_inv_cli(): inv_opts = '' if context.CLIARGS.get('inventory', False): for inv in context.CLIARGS['inventory']: if isinstance(inv, list): inv_opts += " -i '%s' " % ','.join(inv) elif ',' in inv or os.path.exists(inv): inv_opts += ' -i %s ' % inv return inv_opts def init_parser(self): ''' create an options parser for bin/ansible ''' super(PullCLI, self).init_parser( usage='%prog -U <repository> [options] [<playbook.yml>]', desc="pulls playbooks from a VCS repo and executes them for the local host") # Do not add check_options as there's a conflict with --checkout/-C opt_help.add_connect_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_subset_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_runas_prompt_options(self.parser) self.parser.add_argument('args', help='Playbook(s)', metavar='playbook.yml', nargs='*') # options unique to pull self.parser.add_argument('--purge', default=False, action='store_true', help='purge checkout after playbook run') self.parser.add_argument('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', help='only run the playbook if the repository has been updated') self.parser.add_argument('-s', '--sleep', dest='sleep', default=None, help='sleep for random interval (between 0 and n number of seconds) before starting. ' 'This is a useful way to disperse git requests') self.parser.add_argument('-f', '--force', dest='force', default=False, action='store_true', help='run the playbook even if the repository could not be updated') self.parser.add_argument('-d', '--directory', dest='dest', default=None, help='absolute path of repository checkout directory (relative paths are not supported)') self.parser.add_argument('-U', '--url', dest='url', default=None, help='URL of the playbook repository') self.parser.add_argument('--full', dest='fullclone', action='store_true', help='Do a full clone, instead of a shallow one.') self.parser.add_argument('-C', '--checkout', dest='checkout', help='branch/tag/commit to checkout. Defaults to behavior of repository module.') self.parser.add_argument('--accept-host-key', default=False, dest='accept_host_key', action='store_true', help='adds the hostkey for the repo url if not already added') self.parser.add_argument('-m', '--module-name', dest='module_name', default=self.DEFAULT_REPO_TYPE, help='Repository module name, which ansible will use to check out the repo. Choices are %s. Default is %s.' % (self.REPO_CHOICES, self.DEFAULT_REPO_TYPE)) self.parser.add_argument('--verify-commit', dest='verify', default=False, action='store_true', help='verify GPG signature of checked out commit, if it fails abort running the playbook. ' 'This needs the corresponding VCS module to support such an operation') self.parser.add_argument('--clean', dest='clean', default=False, action='store_true', help='modified files in the working repository will be discarded') self.parser.add_argument('--track-subs', dest='tracksubs', default=False, action='store_true', help='submodules will track the latest changes. 
This is equivalent to specifying the --remote flag to git submodule update') # add a subset of the check_opts flag group manually, as the full set's # shortcodes conflict with above --checkout/-C self.parser.add_argument("--check", default=False, dest='check', action='store_true', help="don't make any changes; instead, try to predict some of the changes that may occur") self.parser.add_argument("--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true', help="when changing (small) files and templates, show the differences in those files; works great with --check") def post_process_args(self, options): options = super(PullCLI, self).post_process_args(options) if not options.dest: hostname = socket.getfqdn() # use a hostname dependent directory, in case of $HOME on nfs options.dest = os.path.join('~/.ansible/pull', hostname) options.dest = os.path.expandvars(os.path.expanduser(options.dest)) if os.path.exists(options.dest) and not os.path.isdir(options.dest): raise AnsibleOptionsError("%s is not a valid or accessible directory." % options.dest) if options.sleep: try: secs = random.randint(0, int(options.sleep)) options.sleep = secs except ValueError: raise AnsibleOptionsError("%s is not a number." % options.sleep) if not options.url: raise AnsibleOptionsError("URL for repository not specified, use -h for help") if options.module_name not in self.REPO_CHOICES: raise AnsibleOptionsError("Unsupported repo module %s, choices are %s" % (options.module_name, ','.join(self.REPO_CHOICES))) display.verbosity = options.verbosity self.validate_conflicts(options) return options def run(self): ''' use Runner lib to do SSH things ''' super(PullCLI, self).run() # log command line now = datetime.datetime.now() display.display(now.strftime("Starting Ansible Pull at %F %T")) display.display(' '.join(sys.argv)) # Build Checkout command # Now construct the ansible command node = platform.node() host = socket.getfqdn() limit_opts = 'localhost,%s,127.0.0.1' % ','.join(set([host, node, host.split('.')[0], node.split('.')[0]])) base_opts = '-c local ' if context.CLIARGS['verbosity'] > 0: base_opts += ' -%s' % ''.join(["v" for x in range(0, context.CLIARGS['verbosity'])]) # Attempt to use the inventory passed in as an argument # It might not yet have been downloaded so use localhost as default inv_opts = self._get_inv_cli() if not inv_opts: inv_opts = " -i localhost, " # avoid interpreter discovery since we already know which interpreter to use on localhost inv_opts += '-e %s ' % shlex.quote('ansible_python_interpreter=%s' % sys.executable) # SCM specific options if context.CLIARGS['module_name'] == 'git': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] if context.CLIARGS['accept_host_key']: repo_opts += ' accept_hostkey=yes' if context.CLIARGS['private_key_file']: repo_opts += ' key_file=%s' % context.CLIARGS['private_key_file'] if context.CLIARGS['verify']: repo_opts += ' verify_commit=yes' if context.CLIARGS['tracksubs']: repo_opts += ' track_submodules=yes' if not context.CLIARGS['fullclone']: repo_opts += ' depth=1' elif context.CLIARGS['module_name'] == 'subversion': repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] if not context.CLIARGS['fullclone']: repo_opts += ' export=yes' elif context.CLIARGS['module_name'] == 'hg': repo_opts = "repo=%s dest=%s" % 
(context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' revision=%s' % context.CLIARGS['checkout'] elif context.CLIARGS['module_name'] == 'bzr': repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest']) if context.CLIARGS['checkout']: repo_opts += ' version=%s' % context.CLIARGS['checkout'] else: raise AnsibleOptionsError('Unsupported (%s) SCM module for pull, choices are: %s' % (context.CLIARGS['module_name'], ','.join(self.REPO_CHOICES))) # options common to all supported SCMS if context.CLIARGS['clean']: repo_opts += ' force=yes' path = module_loader.find_plugin(context.CLIARGS['module_name']) if path is None: raise AnsibleOptionsError(("module '%s' not found.\n" % context.CLIARGS['module_name'])) bin_path = os.path.dirname(os.path.abspath(sys.argv[0])) # hardcode local and inventory/host as this is just meant to fetch the repo cmd = '%s/ansible %s %s -m %s -a "%s" all -l "%s"' % (bin_path, inv_opts, base_opts, context.CLIARGS['module_name'], repo_opts, limit_opts) for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) # Nap? if context.CLIARGS['sleep']: display.display("Sleeping for %d seconds..." % context.CLIARGS['sleep']) time.sleep(context.CLIARGS['sleep']) # RUN the Checkout command display.debug("running ansible with VCS module to checkout repo") display.vvvv('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if rc != 0: if context.CLIARGS['force']: display.warning("Unable to update repository. Continuing with (forced) run of playbook.") else: return rc elif context.CLIARGS['ifchanged'] and b'"changed": true' not in b_out: display.display("Repository has not changed, quitting.") return 0 playbook = self.select_playbook(context.CLIARGS['dest']) if playbook is None: raise AnsibleOptionsError("Could not find a playbook to run.") # Build playbook command cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook) if context.CLIARGS['vault_password_files']: for vault_password_file in context.CLIARGS['vault_password_files']: cmd += " --vault-password-file=%s" % vault_password_file if context.CLIARGS['vault_ids']: for vault_id in context.CLIARGS['vault_ids']: cmd += " --vault-id=%s" % vault_id for ev in context.CLIARGS['extra_vars']: cmd += ' -e %s' % shlex.quote(ev) if context.CLIARGS['become_ask_pass']: cmd += ' --ask-become-pass' if context.CLIARGS['skip_tags']: cmd += ' --skip-tags "%s"' % to_native(u','.join(context.CLIARGS['skip_tags'])) if context.CLIARGS['tags']: cmd += ' -t "%s"' % to_native(u','.join(context.CLIARGS['tags'])) if context.CLIARGS['subset']: cmd += ' -l "%s"' % context.CLIARGS['subset'] else: cmd += ' -l "%s"' % limit_opts if context.CLIARGS['check']: cmd += ' -C' if context.CLIARGS['diff']: cmd += ' -D' os.chdir(context.CLIARGS['dest']) # redo inventory options as new files might exist now inv_opts = self._get_inv_cli() if inv_opts: cmd += inv_opts # RUN THE PLAYBOOK COMMAND display.debug("running ansible-playbook to do actual work") display.debug('EXEC: %s' % cmd) rc, b_out, b_err = run_cmd(cmd, live=True) if context.CLIARGS['purge']: os.chdir('/') try: shutil.rmtree(context.CLIARGS['dest']) except Exception as e: display.error(u"Failed to remove %s: %s" % (context.CLIARGS['dest'], to_text(e))) return rc @staticmethod def try_playbook(path): if not os.path.exists(path): return 1 if not os.access(path, os.R_OK): return 2 return 0 @staticmethod def select_playbook(path): playbook = None errors = [] if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not 
None: playbooks = [] for book in context.CLIARGS['args']: book_path = os.path.join(path, book) rc = PullCLI.try_playbook(book_path) if rc != 0: errors.append("%s: %s" % (book_path, PullCLI.PLAYBOOK_ERRORS[rc])) continue playbooks.append(book_path) if 0 < len(errors): display.warning("\n".join(errors)) elif len(playbooks) == len(context.CLIARGS['args']): playbook = " ".join(playbooks) return playbook else: fqdn = socket.getfqdn() hostpb = os.path.join(path, fqdn + '.yml') shorthostpb = os.path.join(path, fqdn.split('.')[0] + '.yml') localpb = os.path.join(path, PullCLI.DEFAULT_PLAYBOOK) for pb in [hostpb, shorthostpb, localpb]: rc = PullCLI.try_playbook(pb) if rc == 0: playbook = pb break else: errors.append("%s: %s" % (pb, PullCLI.PLAYBOOK_ERRORS[rc])) if playbook is None: display.warning("\n".join(errors)) return playbook def main(args=None): PullCLI.cli_executor(args) if __name__ == '__main__': main()
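Given the construction above, one plausible shape of a fix is to seed the implicit inventory with the same names that go into the limit, so every pattern has a host to match. This is a hedged sketch of that idea only; it is not necessarily the change that the linked PR actually merged.

```python
# Hypothetical variant of the option building in PullCLI.run();
# build_cli_host_opts does not exist in Ansible.
def build_cli_host_opts(host, node):
    hostnames = ','.join(sorted(set([host, node, host.split('.')[0], node.split('.')[0]])))
    limit_opts = 'localhost,%s,127.0.0.1' % hostnames
    # the trailing comma keeps -i parsing this as a host list, not a file
    inv_opts = " -i 'localhost,%s,' " % hostnames
    return inv_opts, limit_opts


inv_opts, limit_opts = build_cli_host_opts('web01.example.com', 'web01')
print('%s-l "%s"' % (inv_opts, limit_opts))
```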
closed
ansible/ansible
https://github.com/ansible/ansible
61,233
Template action does not recursively evaluate variables any more
##### SUMMARY
Templates do not appear to be evaluated recursively anymore (see #4638). I don't know when it changed. I am using Ansible -devel or 2.8. If it helps, here is the reproducer from issue #4638:

> [playbook.yml]
>
> ```
> ---
> - name: "test raw template output"
>   hosts: localhost
>
>   vars:
>     myvar: "foo"
>
>   tasks:
>     - local_action: template src="mytemplate.j2" dest="/tmp/myoutput.txt"
> ```
>
> [mytemplate.j2]
>
> ```
> Test: Preserve curly braces, and NOT perform variable substitution:
>
> First Attempt:
> {% raw %}
> - { include: "{{ myvar }}" }
> {% endraw %}
> ```
>
> [/tmp/myoutput.txt]
>
> ```
> Test: Preserve curly braces, and NOT perform variable substitution:
>
>
> First Attempt:
>
> - { include: "foo" }
> ```
>
> NOTE: template output performs variable substitution even inside the {% raw %} block.

and here is the output *I* got:

```
Test: Preserve curly braces, and NOT perform variable substitution:

First Attempt:

- { include: "{{ myvar }}" }
```

At least, I really hope this is a documentation bug. I think the old behaviour was a bad idea. I would not like Ansible to go back to it.

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME
template

##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/alan-sysop/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/alan-sysop/src/ansible/lib/ansible
  executable location = /home/alan-sysop/src/ansible/bin/ansible
  python version = 2.7.16 (default, Apr 30 2019, 15:54:43) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```

##### CONFIGURATION
`ansible-config dump --only-changed` shows no output at all.

##### OS / ENVIRONMENT
Fedora Workstation 30

##### ADDITIONAL INFORMATION
I'm hoping this is all ancient history now. If so, we can simply remove the sentence in question:

> * Using raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively evaluated.

https://docs.ansible.com/ansible/latest/modules/template_module.html#notes
https://github.com/ansible/ansible/issues/61233
https://github.com/ansible/ansible/pull/76955
0d5401d950346d4398b1f8e1ab80e63e33a8266b
8582df36c5931a488d36bc2efaf8fa5324668538
2019-08-23T15:07:11Z
python
2022-02-08T15:26:37Z
lib/ansible/plugins/doc_fragments/template_common.py
# -*- coding: utf-8 -*- # Copyright (c) 2019 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type class ModuleDocFragment(object): # Standard template documentation fragment, use by template and win_template. DOCUMENTATION = r''' description: - Templates are processed by the L(Jinja2 templating language,http://jinja.pocoo.org/docs/). - Documentation on the template formatting can be found in the L(Template Designer Documentation,http://jinja.pocoo.org/docs/templates/). - Additional variables listed below can be used in templates. - C(ansible_managed) (configurable via the C(defaults) section of C(ansible.cfg)) contains a string which can be used to describe the template name, host, modification time of the template file and the owner uid. - C(template_host) contains the node name of the template's machine. - C(template_uid) is the numeric user id of the owner. - C(template_path) is the path of the template. - C(template_fullpath) is the absolute path of the template. - C(template_destpath) is the path of the template on the remote system (added in 2.8). - C(template_run_date) is the date that the template was rendered. options: src: description: - Path of a Jinja2 formatted template on the Ansible controller. - This can be a relative or an absolute path. - The file must be encoded with C(utf-8) but I(output_encoding) can be used to control the encoding of the output template. type: path required: yes dest: description: - Location to render the template to on the remote machine. type: path required: yes newline_sequence: description: - Specify the newline sequence to use for templating files. type: str choices: [ '\n', '\r', '\r\n' ] default: '\n' version_added: '2.4' block_start_string: description: - The string marking the beginning of a block. type: str default: '{%' version_added: '2.4' block_end_string: description: - The string marking the end of a block. type: str default: '%}' version_added: '2.4' variable_start_string: description: - The string marking the beginning of a print statement. type: str default: '{{' version_added: '2.4' variable_end_string: description: - The string marking the end of a print statement. type: str default: '}}' version_added: '2.4' comment_start_string: description: - The string marking the beginning of a comment statement. type: str version_added: '2.12' comment_end_string: description: - The string marking the end of a comment statement. type: str version_added: '2.12' trim_blocks: description: - Determine when newlines should be removed from blocks. - When set to C(yes) the first newline after a block is removed (block, not variable tag!). type: bool default: yes version_added: '2.4' lstrip_blocks: description: - Determine when leading spaces and tabs should be stripped. - When set to C(yes) leading spaces and tabs are stripped from the start of a line to a block. type: bool default: no version_added: '2.6' force: description: - Determine when the file is being transferred if the destination already exists. - When set to C(yes), replace the remote file when contents are different than the source. - When set to C(no), the file will only be transferred if the destination does not exist. type: bool default: yes output_encoding: description: - Overrides the encoding used to write the template file defined by C(dest). - It defaults to C(utf-8), but any encoding supported by python can be used. 
- The source template file must always be encoded using C(utf-8), for homogeneity. type: str default: utf-8 version_added: '2.7' notes: - Including a string that uses a date in the template will result in the template being marked 'changed' each time. - Since Ansible 0.9, templates are loaded with C(trim_blocks=True). - > Also, you can override jinja2 settings by adding a special header to template file. i.e. C(#jinja2:variable_start_string:'[%', variable_end_string:'%]', trim_blocks: False) which changes the variable interpolation markers to C([% var %]) instead of C({{ var }}). This is the best way to prevent evaluation of things that look like, but should not be Jinja2. - Using raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively evaluated. - To find Byte Order Marks in files, use C(Format-Hex <file> -Count 16) on Windows, and use C(od -a -t x1 -N 16 <file>) on Linux. '''
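The raw/endraw note in the fragment above is the sentence questioned in the issue. The behaviour itself is easy to verify outside Ansible with the jinja2 library: a single, non-recursive render leaves a raw block untouched, matching the output the reporter saw. A minimal sketch using plain `jinja2.Template`, not Ansible's templar:

```python
from jinja2 import Template

src = 'First Attempt:\n{% raw %}\n- { include: "{{ myvar }}" }\n{% endraw %}\n'

# one non-recursive render: the raw block survives with braces intact
print(Template(src).render(myvar='foo'))
# First Attempt:
#
# - { include: "{{ myvar }}" }
```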
closed
ansible/ansible
https://github.com/ansible/ansible
58,889
Using set_fact for ansible_password exposes password with warning
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY We deploy to both Linux and Windows servers. For Linux servers, we use ssh with key pairs, then sudo to become the target user. For Windows servers, we use winrm/ntlm with a common ID and have to supply a password. The password needs to be set in a vault. The ansible_password variable cannot be set globally because it breaks the ssh key/pair logins. Our solution was to create a variable that goes in the vault named WIN_PASS. The gather_facts stage fails on the Windows servers the ansible_password isn't set. So we start our playbook by setting ansible_password for those servers using set_fact and gather_facts: no. This works without warning. However, when we proceed to the next play, we get a warning at the play level: **[WARNING]: Removed restricted key from module data: ansible_password = ...**. The variable still contains the password and the tasks successfully authenticate with the windows host. The only issue is that the warning exposes the password. I have seen this issue reported and duplicated several times (for example: set_fact ansible_ssh_common_args fails #37535), but all are now closed. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> task_executor.py ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.7.10 config file = /<custompath>/ansible.cfg configured module search path = [u'/<custompath>/ansible_core/ansible/modules', u'/<custompath>/ansible/ansible_cust/modules'] ansible python module location = /<custompath>/ansible_core/ansible executable location = /<custompath>/ansible_core/bin/ansible python version = 2.7.15 (default, May 3 2019, 15:19:10) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below [plays]$ ansible-config dump --only-changed CACHE_PLUGIN(/<custompath>/ansible/ansible.cfg) = memory DEFAULT_ACTION_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansibl DEFAULT_CALLBACK_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansi DEFAULT_CALLBACK_WHITELIST(/<custompath>/ansible/ansible.cfg) = [u'q_callback:service_callback'] DEFAULT_FACT_PATH(/<custompath>/ansible/ansible.cfg) = /<custompath>/facts DEFAULT_FORKS(/<custompath>/ansible/ansible.cfg) = 50 DEFAULT_GATHER_TIMEOUT(/<custompath>/ansible/ansible.cfg) = 20 DEFAULT_HASH_BEHAVIOUR(/<custompath>/ansible/ansible.cfg) = merge DEFAULT_INVENTORY_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ans DEFAULT_MODULE_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible_core/ansible/ DEFAULT_MODULE_UTILS_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansible DEFAULT_ROLES_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/roles', u'/opt HOST_KEY_CHECKING(/<custompath>/ansible/ansible.cfg) = False INVENTORY_ENABLED(/<custompath>/ansible/ansible.cfg) = [u'test_inv', u'host_list', u'script', u'yam (END) ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> RHEL 6.10 and Windows 10 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Prime Windows Facts hosts: conv gather_facts: no roles: - <role> tasks: - set_fact: ansible_password: "{{ WIN_PASS }}" cacheable: yes - name: Debug something hosts: conv,dmgr tasks: - debug: msg: "{{ ansible_system }}" ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> We don't see the password ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below PLAY [Prime Windows Facts] **************************************************************************** TASK [<role>: Including Passwords] ********************************************************* ok: [<host>] => { "msg": "Including Passwords for <env>" } TASK [<role> : include Environment Passwords] *********************************************** ok: [<host>] TASK [<role> : fact cache windows password] ************************************************* ok: [<host>] TASK [set_fact] *************************************************************************************** ok: [<host>] PLAY [Debug something] ******************************************************************************** [WARNING]: Removed restricted key from module data: ansible_password = REDACTED TASK [Gathering Facts] ******************************************************************************** ok: [<host>] ok: [<host> TASK [debug] ****************************************************************************************** ok: [<host>] => { "msg": "Win32NT" } ok: [<host>] => { "msg": "Linux" } ```
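A hedged sketch of one way to avoid the warning described above, assuming the same `conv` group and vaulted `WIN_PASS` variable from the report: scope the password as a play variable rather than a cacheable fact, so it never enters the fact namespace that gets cleaned between plays:

```yaml
- name: Prime Windows connection without caching a fact
  hosts: conv
  gather_facts: no
  vars:
    # Play-scoped variable instead of set_fact + cacheable: yes; it is not
    # persisted as a fact, so the fact-cleaning pass has nothing to redact.
    ansible_password: "{{ WIN_PASS }}"
  tasks:
    - name: Facts can now be gathered with the password in place
      ansible.builtin.setup:
```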
https://github.com/ansible/ansible/issues/58889
https://github.com/ansible/ansible/pull/76974
eb093ae7c30f2c18ca53c84b7833ea53ee2f1e04
47faa6e206ccd697b4050062147a5d3242435597
2019-07-09T21:06:29Z
python
2022-02-08T16:27:40Z
changelogs/fragments/clean_facts_values.yml
closed
ansible/ansible
https://github.com/ansible/ansible
58,889
Using set_fact for ansible_password exposes password with warning
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY We deploy to both Linux and Windows servers. For Linux servers, we use ssh with key pairs, then sudo to become the target user. For Windows servers, we use winrm/ntlm with a common ID and have to supply a password. The password needs to be set in a vault. The ansible_password variable cannot be set globally because it breaks the ssh key/pair logins. Our solution was to create a variable that goes in the vault named WIN_PASS. The gather_facts stage fails on the Windows servers the ansible_password isn't set. So we start our playbook by setting ansible_password for those servers using set_fact and gather_facts: no. This works without warning. However, when we proceed to the next play, we get a warning at the play level: **[WARNING]: Removed restricted key from module data: ansible_password = ...**. The variable still contains the password and the tasks successfully authenticate with the windows host. The only issue is that the warning exposes the password. I have seen this issue reported and duplicated several times (for example: set_fact ansible_ssh_common_args fails #37535), but all are now closed. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> task_executor.py ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.7.10 config file = /<custompath>/ansible.cfg configured module search path = [u'/<custompath>/ansible_core/ansible/modules', u'/<custompath>/ansible/ansible_cust/modules'] ansible python module location = /<custompath>/ansible_core/ansible executable location = /<custompath>/ansible_core/bin/ansible python version = 2.7.15 (default, May 3 2019, 15:19:10) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below [plays]$ ansible-config dump --only-changed CACHE_PLUGIN(/<custompath>/ansible/ansible.cfg) = memory DEFAULT_ACTION_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansibl DEFAULT_CALLBACK_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansi DEFAULT_CALLBACK_WHITELIST(/<custompath>/ansible/ansible.cfg) = [u'q_callback:service_callback'] DEFAULT_FACT_PATH(/<custompath>/ansible/ansible.cfg) = /<custompath>/facts DEFAULT_FORKS(/<custompath>/ansible/ansible.cfg) = 50 DEFAULT_GATHER_TIMEOUT(/<custompath>/ansible/ansible.cfg) = 20 DEFAULT_HASH_BEHAVIOUR(/<custompath>/ansible/ansible.cfg) = merge DEFAULT_INVENTORY_PLUGIN_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ans DEFAULT_MODULE_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible_core/ansible/ DEFAULT_MODULE_UTILS_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/ansible DEFAULT_ROLES_PATH(/<custompath>/ansible/ansible.cfg) = [u'/<custompath>/ansible/roles', u'/opt HOST_KEY_CHECKING(/<custompath>/ansible/ansible.cfg) = False INVENTORY_ENABLED(/<custompath>/ansible/ansible.cfg) = [u'test_inv', u'host_list', u'script', u'yam (END) ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> RHEL 6.10 and Windows 10 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Prime Windows Facts hosts: conv gather_facts: no roles: - <role> tasks: - set_fact: ansible_password: "{{ WIN_PASS }}" cacheable: yes - name: Debug something hosts: conv,dmgr tasks: - debug: msg: "{{ ansible_system }}" ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> We don't see the password ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below PLAY [Prime Windows Facts] **************************************************************************** TASK [<role>: Including Passwords] ********************************************************* ok: [<host>] => { "msg": "Including Passwords for <env>" } TASK [<role> : include Environment Passwords] *********************************************** ok: [<host>] TASK [<role> : fact cache windows password] ************************************************* ok: [<host>] TASK [set_fact] *************************************************************************************** ok: [<host>] PLAY [Debug something] ******************************************************************************** [WARNING]: Removed restricted key from module data: ansible_password = REDACTED TASK [Gathering Facts] ******************************************************************************** ok: [<host>] ok: [<host> TASK [debug] ****************************************************************************************** ok: [<host>] => { "msg": "Win32NT" } ok: [<host>] => { "msg": "Linux" } ```
https://github.com/ansible/ansible/issues/58889
https://github.com/ansible/ansible/pull/76974
eb093ae7c30f2c18ca53c84b7833ea53ee2f1e04
47faa6e206ccd697b4050062147a5d3242435597
2019-07-09T21:06:29Z
python
2022-02-08T16:27:40Z
lib/ansible/vars/clean.py
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import re

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils import six
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import MutableMapping, MutableSequence
from ansible.plugins.loader import connection_loader
from ansible.utils.display import Display

display = Display()


def module_response_deepcopy(v):
    """Function to create a deep copy of module response data

    Designed to be used within the Ansible "engine" to improve performance
    issues where ``copy.deepcopy`` was used previously, largely with CPU
    and memory contention.

    This only supports the following data types, and was designed to only
    handle specific workloads:

    * ``dict``
    * ``list``

    The data we pass here will come from a serialization such
    as JSON, so we shouldn't have need for other data types such as
    ``set`` or ``tuple``.

    Take note that this function should not be used extensively as a
    replacement for ``deepcopy`` due to the naive way in which this
    handles other data types.

    Do not expect uses outside of those listed below to maintain
    backwards compatibility, in case we need to extend this function
    to handle our specific needs:

    * ``ansible.executor.task_result.TaskResult.clean_copy``
    * ``ansible.vars.clean.clean_facts``
    * ``ansible.vars.namespace_facts``
    """
    if isinstance(v, dict):
        ret = v.copy()
        items = six.iteritems(ret)
    elif isinstance(v, list):
        ret = v[:]
        items = enumerate(ret)
    else:
        return v

    for key, value in items:
        if isinstance(value, (dict, list)):
            ret[key] = module_response_deepcopy(value)
        else:
            ret[key] = value

    return ret


def strip_internal_keys(dirty, exceptions=None):
    # All keys starting with _ansible_ are internal, so change the 'dirty' mapping and remove them.

    if exceptions is None:
        exceptions = tuple()

    if isinstance(dirty, MutableSequence):
        for element in dirty:
            if isinstance(element, (MutableMapping, MutableSequence)):
                strip_internal_keys(element, exceptions=exceptions)

    elif isinstance(dirty, MutableMapping):
        # listify to avoid updating dict while iterating over it
        for k in list(dirty.keys()):
            if isinstance(k, six.string_types):
                if k.startswith('_ansible_') and k not in exceptions:
                    del dirty[k]
                    continue

            if isinstance(dirty[k], (MutableMapping, MutableSequence)):
                strip_internal_keys(dirty[k], exceptions=exceptions)
    else:
        raise AnsibleError("Cannot strip invalid keys from %s" % type(dirty))

    return dirty


def remove_internal_keys(data):
    '''
    More nuanced version of strip_internal_keys
    '''
    for key in list(data.keys()):
        if (key.startswith('_ansible_') and key != '_ansible_parsed') or key in C.INTERNAL_RESULT_KEYS:
            display.warning("Removed unexpected internal key in module return: %s = %s" % (key, data[key]))
            del data[key]

    # remove bad/empty internal keys
    for key in ['warnings', 'deprecations']:
        if key in data and not data[key]:
            del data[key]

    # cleanse fact values that are allowed from actions but not modules
    for key in list(data.get('ansible_facts', {}).keys()):
        if key.startswith('discovered_interpreter_') or key.startswith('ansible_discovered_interpreter_'):
            del data['ansible_facts'][key]


def clean_facts(facts):
    ''' remove facts that can override internal keys or otherwise deemed unsafe '''
    data = module_response_deepcopy(facts)

    remove_keys = set()
    fact_keys = set(data.keys())
    # first we add all of our magic variable names to the set of
    # keys we want to remove from facts
    # NOTE: these will eventually disappear in favor of others below
    for magic_var in C.MAGIC_VARIABLE_MAPPING:
        remove_keys.update(fact_keys.intersection(C.MAGIC_VARIABLE_MAPPING[magic_var]))

    # remove common connection vars
    remove_keys.update(fact_keys.intersection(C.COMMON_CONNECTION_VARS))

    # next we remove any connection plugin specific vars
    for conn_path in connection_loader.all(path_only=True):
        conn_name = os.path.splitext(os.path.basename(conn_path))[0]
        re_key = re.compile('^ansible_%s_' % re.escape(conn_name))
        for fact_key in fact_keys:
            # most lightweight VM or container tech creates devices with this pattern, this avoids filtering them out
            if (re_key.match(fact_key) and not fact_key.endswith(('_bridge', '_gwbridge'))) or fact_key.startswith('ansible_become_'):
                remove_keys.add(fact_key)

    # remove some KNOWN keys
    for hard in C.RESTRICTED_RESULT_KEYS + C.INTERNAL_RESULT_KEYS:
        if hard in fact_keys:
            remove_keys.add(hard)

    # finally, we search for interpreter keys to remove
    re_interp = re.compile('^ansible_.*_interpreter$')
    for fact_key in fact_keys:
        if re_interp.match(fact_key):
            remove_keys.add(fact_key)

    # then we remove them (except for ssh host keys)
    for r_key in remove_keys:
        if not r_key.startswith('ansible_ssh_host_key_'):
            try:
                r_val = to_text(data[r_key])
                if len(r_val) > 24:
                    r_val = '%s ... %s' % (r_val[:13], r_val[-6:])
            except Exception:
                r_val = ' <failed to convert value to a string> '
            display.warning("Removed restricted key from module data: %s = %s" % (r_key, r_val))
            del data[r_key]

    return strip_internal_keys(data)


def namespace_facts(facts):
    ''' return all facts inside 'ansible_facts' w/o an ansible_ prefix '''
    deprefixed = {}
    for k in facts:
        if k.startswith('ansible_') and k not in ('ansible_local',):
            deprefixed[k[8:]] = module_response_deepcopy(facts[k])
        else:
            deprefixed[k] = module_response_deepcopy(facts[k])

    return {'ansible_facts': deprefixed}
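To see the cleaning above from the playbook side, a small sketch with made-up values: `ansible_password` matches the magic/connection variable names collected into `remove_keys`, so `clean_facts()` drops it from cached facts and emits the "Removed restricted key" warning reported in this issue, while an ordinary fact passes through untouched:

```yaml
- hosts: localhost
  gather_facts: no
  tasks:
    - ansible.builtin.set_fact:
        my_port: 8443                  # ordinary fact: survives cleaning
        ansible_password: example123   # restricted key: stripped with a warning
        cacheable: yes
```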
closed
ansible/ansible
https://github.com/ansible/ansible
76,867
service_facts module quietly returns incomplete services when privileges are wrong
### Summary If you fail to add `become: true` on an Ubuntu system with systemd, you will quietly receive a list of the `sysv` services without any systemd services. This is due to the service facts module implementing its own separate test for systemd https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service_facts.py#L224-L234 which quietly ignores a failure. If it were to use the value of the ansible fact `ansible_service_mgr` instead then it would correctly catch this failure and report the warning. ### Issue Type Bug Report ### Component Name service_facts module ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/jrhett/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/homebrew/Cellar/ansible/5.2.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/jrhett/.ansible/collections:/usr/share/ansible/collections executable location = /opt/homebrew/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:18:13) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed (empty) ``` ### OS / Environment Target node is Ubuntu 20.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```shell $ ansible -m service_facts <node ip> ``` ### Expected Results `WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges.` ### Actual Results ```console A quiet success with only sysv services. 52.14.195.85 | SUCCESS => { "ansible_facts": { "services": { "acpid": { "name": "acpid", "source": "sysv", "state": "stopped" }, "apparmor": { "name": "apparmor", "source": "sysv", "state": "stopped" }, ``` ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
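The straightforward mitigation implied by the report, as a sketch (the host pattern is a placeholder): run the module with privilege escalation so the systemd scan is not silently skipped:

```yaml
- name: Gather complete service facts
  hosts: all
  become: true
  tasks:
    - ansible.builtin.service_facts:

    - ansible.builtin.debug:
        msg: "{{ ansible_facts.services | length }} services found"
```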
https://github.com/ansible/ansible/issues/76867
https://github.com/ansible/ansible/pull/76921
47faa6e206ccd697b4050062147a5d3242435597
699ecb83082b3b24a1a0a21cfadf19eda8891bff
2022-01-27T21:51:57Z
python
2022-02-08T18:05:21Z
changelogs/fragments/service_facts_warnings.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76,867
service_facts module quietly returns incomplete services when privileges are wrong
### Summary If you fail to add `become: true` on an Ubuntu system with systemd, you will quietly receive a list of the `sysv` services without any systemd services. This is due to the service facts module implementing its own separate test for systemd https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service_facts.py#L224-L234 which quietly ignores a failure. If it were to use the value of the ansible fact `ansible_service_mgr` instead then it would correctly catch this failure and report the warning. ### Issue Type Bug Report ### Component Name service_facts module ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/jrhett/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/homebrew/Cellar/ansible/5.2.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/jrhett/.ansible/collections:/usr/share/ansible/collections executable location = /opt/homebrew/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:18:13) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed (empty) ``` ### OS / Environment Target node is Ubuntu 20.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```shell $ ansible -m service_facts <node ip> ``` ### Expected Results `WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges.` ### Actual Results ```console A quiet success with only sysv services. 52.14.195.85 | SUCCESS => { "ansible_facts": { "services": { "acpid": { "name": "acpid", "source": "sysv", "state": "stopped" }, "apparmor": { "name": "apparmor", "source": "sysv", "state": "stopped" }, ``` ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76867
https://github.com/ansible/ansible/pull/76921
47faa6e206ccd697b4050062147a5d3242435597
699ecb83082b3b24a1a0a21cfadf19eda8891bff
2022-01-27T21:51:57Z
python
2022-02-08T18:05:21Z
lib/ansible/modules/service_facts.py
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

# originally copied from AWX's scan_services module to bring this functionality
# into Core

from __future__ import absolute_import, division, print_function
__metaclass__ = type

DOCUMENTATION = r'''
---
module: service_facts
short_description: Return service state information as fact data
description:
  - Return service state information as fact data for various service management utilities.
version_added: "2.5"
requirements: ["Any of the following supported init systems: systemd, sysv, upstart, openrc, AIX SRC"]
extends_documentation_fragment:
  - action_common_attributes
  - action_common_attributes.facts
attributes:
    check_mode:
        support: full
    diff_mode:
        support: none
    facts:
        support: full
    platform:
        platforms: posix
notes:
  - When accessing the C(ansible_facts.services) facts collected by this module,
    it is recommended to not use "dot notation" because services can have a C(-)
    character in their name which would result in invalid "dot notation", such as
    C(ansible_facts.services.zuul-gateway). It is instead recommended to
    using the string value of the service name as the key in order to obtain
    the fact data value like C(ansible_facts.services['zuul-gateway'])
  - AIX SRC was added in version 2.11.
author:
  - Adam Miller (@maxamillion)
'''

EXAMPLES = r'''
- name: Populate service facts
  ansible.builtin.service_facts:

- name: Print service facts
  ansible.builtin.debug:
    var: ansible_facts.services
'''

RETURN = r'''
ansible_facts:
  description: Facts to add to ansible_facts about the services on the system
  returned: always
  type: complex
  contains:
    services:
      description: States of the services with service name as key.
      returned: always
      type: complex
      contains:
        source:
          description:
          - Init system of the service.
          - One of C(rcctl), C(systemd), C(sysv), C(upstart), C(src).
          returned: always
          type: str
          sample: sysv
        state:
          description:
          - State of the service.
          - 'This commonly includes (but is not limited to) the following: C(failed), C(running), C(stopped) or C(unknown).'
          - Depending on the used init system additional states might be returned.
          returned: always
          type: str
          sample: running
        status:
          description:
          - State of the service.
          - Either C(enabled), C(disabled), C(static), C(indirect) or C(unknown).
          returned: systemd systems or RedHat/SUSE flavored sysvinit/upstart or OpenBSD
          type: str
          sample: enabled
        name:
          description: Name of the service.
          returned: always
          type: str
          sample: arp-ethers.service
'''

import platform
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale


class BaseService(object):

    def __init__(self, module):
        self.module = module
        self.incomplete_warning = False


class ServiceScanService(BaseService):

    def gather_services(self):
        services = {}
        service_path = self.module.get_bin_path("service")
        if service_path is None:
            return None
        initctl_path = self.module.get_bin_path("initctl")
        chkconfig_path = self.module.get_bin_path("chkconfig")
        rc_status_path = self.module.get_bin_path("rc-status")
        rc_update_path = self.module.get_bin_path("rc-update")

        # sysvinit
        if service_path is not None and chkconfig_path is None and rc_status_path is None:
            rc, stdout, stderr = self.module.run_command("%s --status-all 2>&1 | grep -E \"\\[ (\\+|\\-) \\]\"" % service_path, use_unsafe_shell=True)
            for line in stdout.split("\n"):
                line_data = line.split()
                if len(line_data) < 4:
                    continue  # Skipping because we expected more data
                service_name = " ".join(line_data[3:])
                if line_data[1] == "+":
                    service_state = "running"
                else:
                    service_state = "stopped"
                services[service_name] = {"name": service_name, "state": service_state, "source": "sysv"}

        # Upstart
        if initctl_path is not None and chkconfig_path is None:
            p = re.compile(r'^\s?(?P<name>.*)\s(?P<goal>\w+)\/(?P<state>\w+)(\,\sprocess\s(?P<pid>[0-9]+))?\s*$')
            rc, stdout, stderr = self.module.run_command("%s list" % initctl_path)
            real_stdout = stdout.replace("\r", "")
            for line in real_stdout.split("\n"):
                m = p.match(line)
                if not m:
                    continue
                service_name = m.group('name')
                service_goal = m.group('goal')
                service_state = m.group('state')
                if m.group('pid'):
                    pid = m.group('pid')
                else:
                    pid = None  # NOQA
                payload = {"name": service_name, "state": service_state, "goal": service_goal, "source": "upstart"}
                services[service_name] = payload

        # RH sysvinit
        elif chkconfig_path is not None:
            # print '%s --status-all | grep -E "is (running|stopped)"' % service_path
            p = re.compile(
                r'(?P<service>.*?)\s+[0-9]:(?P<rl0>on|off)\s+[0-9]:(?P<rl1>on|off)\s+[0-9]:(?P<rl2>on|off)\s+'
                r'[0-9]:(?P<rl3>on|off)\s+[0-9]:(?P<rl4>on|off)\s+[0-9]:(?P<rl5>on|off)\s+[0-9]:(?P<rl6>on|off)')
            rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)
            # Check for special cases where stdout does not fit pattern
            match_any = False
            for line in stdout.split('\n'):
                if p.match(line):
                    match_any = True
            if not match_any:
                p_simple = re.compile(r'(?P<service>.*?)\s+(?P<rl0>on|off)')
                match_any = False
                for line in stdout.split('\n'):
                    if p_simple.match(line):
                        match_any = True
                if match_any:
                    # Try extra flags " -l --allservices" needed for SLES11
                    rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)
                elif '--list' in stderr:
                    # Extra flag needed for RHEL5
                    rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)
            for line in stdout.split('\n'):
                m = p.match(line)
                if m:
                    service_name = m.group('service')
                    service_state = 'stopped'
                    service_status = "disabled"
                    if m.group('rl3') == 'on':
                        service_status = "enabled"
                    rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)
                    service_state = rc
                    if rc in (0,):
                        service_state = 'running'
                    # elif rc in (1,3):
                    else:
                        if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():
                            self.incomplete_warning = True
                            continue
                        else:
                            service_state = 'stopped'
                    service_data = {"name": service_name, "state": service_state, "status": service_status, "source": "sysv"}
                    services[service_name] = service_data

        # openrc
        elif rc_status_path is not None and rc_update_path is not None:
            all_services_runlevels = {}
            rc, stdout, stderr = self.module.run_command("%s -a -s -m 2>&1 | grep '^ ' | tr -d '[]'" % rc_status_path, use_unsafe_shell=True)
            rc_u, stdout_u, stderr_u = self.module.run_command("%s show -v 2>&1 | grep '|'" % rc_update_path, use_unsafe_shell=True)
            for line in stdout_u.split('\n'):
                line_data = line.split('|')
                if len(line_data) < 2:
                    continue
                service_name = line_data[0].strip()
                runlevels = line_data[1].strip()
                if not runlevels:
                    all_services_runlevels[service_name] = None
                else:
                    all_services_runlevels[service_name] = runlevels.split()
            for line in stdout.split('\n'):
                line_data = line.split()
                if len(line_data) < 2:
                    continue
                service_name = line_data[0]
                service_state = line_data[1]
                service_runlevels = all_services_runlevels[service_name]
                service_data = {"name": service_name, "runlevels": service_runlevels, "state": service_state, "source": "openrc"}
                services[service_name] = service_data
        return services


class SystemctlScanService(BaseService):

    def systemd_enabled(self):
        # Check if init is the systemd command, using comm as cmdline could be symlink
        try:
            f = open('/proc/1/comm', 'r')
        except IOError:
            # If comm doesn't exist, old kernel, no systemd
            return False
        for line in f:
            if 'systemd' in line:
                return True
        return False

    def gather_services(self):

        BAD_STATES = frozenset(['not-found', 'masked', 'failed'])
        services = {}
        if not self.systemd_enabled():
            return None
        systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
        if systemctl_path is None:
            return None

        # list units as systemd sees them
        rc, stdout, stderr = self.module.run_command("%s list-units --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
        for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line]:

            state_val = "stopped"
            status_val = "unknown"
            fields = line.split()
            for bad in BAD_STATES:
                if bad in fields:  # dot is 0
                    status_val = bad
                    fields = fields[1:]
                    break
            else:
                # active/inactive
                status_val = fields[2]

            # array is normalized so predictable now
            service_name = fields[0]
            if fields[3] == "running":
                state_val = "running"

            services[service_name] = {"name": service_name, "state": state_val, "status": status_val, "source": "systemd"}

        # now try unit files for complete picture and final 'status'
        rc, stdout, stderr = self.module.run_command("%s list-unit-files --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
        for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line]:
            # there is one more column (VENDOR PRESET) from `systemctl list-unit-files` for systemd >= 245
            try:
                service_name, status_val = line.split()[:2]
            except IndexError:
                self.module.fail_json(msg="Malformed output discovered from systemd list-unit-files: {0}".format(line))
            if service_name not in services:
                rc, stdout, stderr = self.module.run_command("%s show %s --property=ActiveState" % (systemctl_path, service_name), use_unsafe_shell=True)
                state = 'unknown'
                if not rc and stdout != '':
                    state = stdout.replace('ActiveState=', '').rstrip()
                services[service_name] = {"name": service_name, "state": state, "status": status_val, "source": "systemd"}
            elif services[service_name]["status"] not in BAD_STATES:
                services[service_name]["status"] = status_val

        return services


class AIXScanService(BaseService):

    def gather_services(self):

        services = {}
        if platform.system() != 'AIX':
            return None
        lssrc_path = self.module.get_bin_path("lssrc")
        if lssrc_path is None:
            return None
        rc, stdout, stderr = self.module.run_command("%s -a" % lssrc_path)
        for line in stdout.split('\n'):
            line_data = line.split()
            if len(line_data) < 2:
                continue  # Skipping because we expected more data
            if line_data[0] == "Subsystem":
                continue  # Skip header
            service_name = line_data[0]
            if line_data[-1] == "active":
                service_state = "running"
            elif line_data[-1] == "inoperative":
                service_state = "stopped"
            else:
                service_state = "unknown"
            services[service_name] = {"name": service_name, "state": service_state, "source": "src"}
        return services


class OpenBSDScanService(BaseService):

    def query_rcctl(self, cmd):
        svcs = []
        rc, stdout, stderr = self.module.run_command("%s ls %s" % (self.rcctl_path, cmd))
        if 'needs root privileges' in stderr.lower():
            self.incomplete_warning = True
            return []
        for svc in stdout.split('\n'):
            if svc == '':
                continue
            else:
                svcs.append(svc)
        return svcs

    def gather_services(self):
        services = {}
        self.rcctl_path = self.module.get_bin_path("rcctl")
        if self.rcctl_path is None:
            return None

        for svc in self.query_rcctl('all'):
            services[svc] = {'name': svc, 'source': 'rcctl'}

        for svc in self.query_rcctl('on'):
            services[svc].update({'status': 'enabled'})

        for svc in self.query_rcctl('started'):
            services[svc].update({'state': 'running'})

        # Based on the list of services that are enabled, determine which are disabled
        [services[svc].update({'status': 'disabled'}) for svc in services if services[svc].get('status') is None]

        # and do the same for those that aren't running
        [services[svc].update({'state': 'stopped'}) for svc in services if services[svc].get('state') is None]

        # Override the state for services which are marked as 'failed'
        for svc in self.query_rcctl('failed'):
            services[svc].update({'state': 'failed'})

        return services


def main():
    module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)
    locale = get_best_parsable_locale(module)
    module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale)
    service_modules = (ServiceScanService, SystemctlScanService, AIXScanService, OpenBSDScanService)
    all_services = {}
    incomplete_warning = False
    for svc_module in service_modules:
        svcmod = svc_module(module)
        svc = svcmod.gather_services()
        if svc is not None:
            all_services.update(svc)
            if svcmod.incomplete_warning:
                incomplete_warning = True
    if len(all_services) == 0:
        results = dict(skipped=True, msg="Failed to find any services. Sometimes this is due to insufficient privileges.")
    else:
        results = dict(ansible_facts=dict(services=all_services))
        if incomplete_warning:
            results['msg'] = "WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges."
    module.exit_json(**results)


if __name__ == '__main__':
    main()
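Following the note in the module documentation above about avoiding dot notation, a small consumption sketch (the service name is just an example):

```yaml
- name: Populate service facts
  ansible.builtin.service_facts:

- name: Inspect a service whose name contains a dash, using bracket notation
  ansible.builtin.debug:
    msg: "{{ ansible_facts.services['zuul-gateway'].state | default('absent') }}"
```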
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
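For context, a minimal sketch of the kind of `meta/runtime.yml` entry that, per the findings above, triggers the failure in an all-PowerShell collection (the version constraint is an example value):

```yaml
# meta/runtime.yml
requires_ansible: '>=2.11'
```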
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
changelogs/fragments/ansible-test-validate-modules-collection-loader.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/integration/targets/ansible-test/ansible_collections/ns/ps_only/meta/runtime.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/integration/targets/ansible-test/ansible_collections/ns/ps_only/plugins/module_utils/validate.psm1
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/integration/targets/ansible-test/ansible_collections/ns/ps_only/plugins/modules/validate.ps1
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/integration/targets/ansible-test/ansible_collections/ns/ps_only/plugins/modules/validate.py
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml (paste below) ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/integration/targets/ansible-test/collection-tests/validate-modules-collection-loader.sh
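The `[WARNING]: packaging Python module unavailable` line in the report above comes from a guarded import: the `requires_ansible` constraint in `meta/runtime.yml` can only be validated when the third-party `packaging` library is importable. A minimal sketch of that pattern, under the assumption that a helper like `check_requires_ansible` exists (the name is illustrative, not the actual function in validate-modules):

```python
# Sketch only: guarded import of the optional 'packaging' dependency,
# mirroring the warning seen in the sanity output above.
try:
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version
    HAS_PACKAGING = True
except ImportError:
    HAS_PACKAGING = False


def check_requires_ansible(requires_ansible, ansible_version):
    """Return whether ansible_version satisfies requires_ansible, or None if packaging is absent."""
    if not HAS_PACKAGING:
        return None  # the caller can only emit a warning, as in the output above
    return Version(ansible_version) in SpecifierSet(requires_ansible)


print(check_requires_ansible('>=2.10,<2.13', '2.12.1'))  # True when packaging is installed
```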
closed
ansible/ansible
https://github.com/ansible/ansible
76,960
`ansible-test sanity` fails on `validate-modules` in a collection that contains no python modules and uses `meta/runtime.yml` in Ansible 2.12+
### Summary I am working with @lowlydba on an [MSSQL-focused collection](https://github.com/LowlyDBA/lowlydba.sqlserver), and we're getting the basics set up. We have hit the following error in the sanity tests for Ansible 2.12 and higher: ``` Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ``` This looks a lot like https://github.com/ansible/ansible/pull/76513 and https://github.com/ansible/ansible/issues/76504 , but is still happening in `devel` and `milestone` too. We have been troubleshooting it in https://github.com/LowlyDBA/lowlydba.sqlserver/issues/6 and have found the following: - Error goes away if there are no PowerShell modules present (the collection will be entirely PowerShell modules at this time) - Error goes away if `requires_ansible` is not present in `meta/runtime.yml` - Error goes away if a python module is included in the collection (a python plugin was not sufficient, tested with a filter plugin) ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version Affects `stable-2.12`, `milestone`, and `devel` branches. ### Configuration ```console $ ansible-config dump --only-changed n/a ``` ### OS / Environment WSL2 / Ubuntu 18.04 ### Steps to Reproduce With [the collection](https://github.com/LowlyDBA/lowlydba.sqlserver) checked out in the right directory structure: ```yaml ansible-test sanity --docker default ``` ### Expected Results Clean run ### Actual Results ```console Running sanity test "validate-modules" ERROR: Command "/root/.ansible/test/venv/sanity.validate-modules/3.9/2e4ce301/bin/python /root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/memory.ps1 plugins/modules/memory.py --collection ansible_collections/lowlydba/sqlserver --collection-version 0.0.0" returned exit status 0. >>> Standard Error [WARNING]: packaging Python module unavailable; unable to validate collection Ansible version requirements >>> Standard Output {} ERROR: Command "docker exec -it ansible-test-controller-EL88bWob /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/lowlydba/sqlserver LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test sanity --containers '{"control": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}, "managed": {"__pypi_proxy__": {"pypi-test-container-EL88bWob": {"host_ip": "172.17.0.2", "names": ["pypi-test-container-EL88bWob"], "ports": [3141]}}}}' --metadata tests/output/.tmp/metadata-s3jmdrvh.json --truncate 198 --color yes --host-path tests/output/.tmp/host-zpgh4g5o" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76960
https://github.com/ansible/ansible/pull/76986
699ecb83082b3b24a1a0a21cfadf19eda8891bff
0d40423f1c1c7ca2e71f30c9eca60ca60af93ff2
2022-02-06T16:59:14Z
python
2022-02-08T19:38:00Z
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py
# -*- coding: utf-8 -*- # # Copyright (C) 2015 Matt Martz <[email protected]> # Copyright (C) 2015 Rackspace US, Inc. # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. from __future__ import annotations import abc import argparse import ast import datetime import json import errno import os import re import subprocess import sys import tempfile import traceback from collections import OrderedDict from contextlib import contextmanager from ansible.module_utils.compat.version import StrictVersion, LooseVersion from fnmatch import fnmatch import yaml from ansible import __version__ as ansible_version from ansible.executor.module_common import REPLACER_WINDOWS, NEW_STYLE_PYTHON_MODULE_RE from ansible.module_utils.common._collections_compat import Mapping from ansible.module_utils.common.parameters import DEFAULT_TYPE_VALIDATORS from ansible.plugins.loader import fragment_loader from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder from ansible.utils.plugin_docs import REJECTLIST, add_collection_to_versions_and_dates, add_fragments, get_docstring from ansible.utils.version import SemanticVersion from ansible.module_utils.basic import to_bytes from .module_args import AnsibleModuleImportError, AnsibleModuleNotInitialized, get_argument_spec from .schema import ansible_module_kwargs_schema, doc_schema, return_schema from .utils import CaptureStd, NoArgsAnsibleModule, compare_unordered_lists, is_empty, parse_yaml, parse_isodate from voluptuous.humanize import humanize_error from ansible.module_utils.six import PY3, with_metaclass, string_types if PY3: # Because there is no ast.TryExcept in Python 3 ast module TRY_EXCEPT = ast.Try # REPLACER_WINDOWS from ansible.executor.module_common is byte # string but we need unicode for Python 3 REPLACER_WINDOWS = REPLACER_WINDOWS.decode('utf-8') else: TRY_EXCEPT = ast.TryExcept REJECTLIST_DIRS = frozenset(('.git', 'test', '.github', '.idea')) INDENT_REGEX = re.compile(r'([\t]*)') TYPE_REGEX = re.compile(r'.*(if|or)(\s+[^"\']*|\s+)(?<!_)(?<!str\()type\([^)].*') SYS_EXIT_REGEX = re.compile(r'[^#]*sys.exit\s*\(.*') NO_LOG_REGEX = re.compile(r'(?:pass(?!ive)|secret|token|key)', re.I) REJECTLIST_IMPORTS = { 'requests': { 'new_only': True, 'error': { 'code': 'use-module-utils-urls', 'msg': ('requests import found, should use ' 'ansible.module_utils.urls instead') } }, r'boto(?:\.|$)': { 'new_only': True, 'error': { 'code': 'use-boto3', 'msg': 'boto import found, new modules should use boto3' } }, } SUBPROCESS_REGEX = re.compile(r'subprocess\.Po.*') OS_CALL_REGEX = re.compile(r'os\.call.*') LOOSE_ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version.split('.')[:3])) def is_potential_secret_option(option_name): if not NO_LOG_REGEX.search(option_name): return False # If this is a count, type, algorithm, timeout, filename, or name, it is probably not a secret if option_name.endswith(( '_count', '_type', '_alg', '_algorithm', '_timeout', '_name', '_comment', 
'_bits', '_id', '_identifier', '_period', '_file', '_filename', )): return False # 'key' also matches 'publickey', which is generally not secret if any(part in option_name for part in ( 'publickey', 'public_key', 'keyusage', 'key_usage', 'keyserver', 'key_server', 'keysize', 'key_size', 'keyservice', 'key_service', 'pub_key', 'pubkey', 'keyboard', 'secretary', )): return False return True def compare_dates(d1, d2): try: date1 = parse_isodate(d1, allow_date=True) date2 = parse_isodate(d2, allow_date=True) return date1 == date2 except ValueError: # At least one of d1 and d2 cannot be parsed. Simply compare values. return d1 == d2 class ReporterEncoder(json.JSONEncoder): def default(self, o): if isinstance(o, Exception): return str(o) return json.JSONEncoder.default(self, o) class Reporter: def __init__(self): self.files = OrderedDict() def _ensure_default_entry(self, path): try: self.files[path] except KeyError: self.files[path] = { 'errors': [], 'warnings': [], 'traces': [], 'warning_traces': [] } def _log(self, path, code, msg, level='error', line=0, column=0): self._ensure_default_entry(path) lvl_dct = self.files[path]['%ss' % level] lvl_dct.append({ 'code': code, 'msg': msg, 'line': line, 'column': column }) def error(self, *args, **kwargs): self._log(*args, level='error', **kwargs) def warning(self, *args, **kwargs): self._log(*args, level='warning', **kwargs) def trace(self, path, tracebk): self._ensure_default_entry(path) self.files[path]['traces'].append(tracebk) def warning_trace(self, path, tracebk): self._ensure_default_entry(path) self.files[path]['warning_traces'].append(tracebk) @staticmethod @contextmanager def _output_handle(output): if output != '-': handle = open(output, 'w+') else: handle = sys.stdout yield handle handle.flush() handle.close() @staticmethod def _filter_out_ok(reports): temp_reports = OrderedDict() for path, report in reports.items(): if report['errors'] or report['warnings']: temp_reports[path] = report return temp_reports def plain(self, warnings=False, output='-'): """Print out the test results in plain format output is ignored here for now """ ret = [] for path, report in Reporter._filter_out_ok(self.files).items(): traces = report['traces'][:] if warnings and report['warnings']: traces.extend(report['warning_traces']) for trace in traces: print('TRACE:') print('\n '.join((' %s' % trace).splitlines())) for error in report['errors']: error['path'] = path print('%(path)s:%(line)d:%(column)d: E%(code)s %(msg)s' % error) ret.append(1) if warnings: for warning in report['warnings']: warning['path'] = path print('%(path)s:%(line)d:%(column)d: W%(code)s %(msg)s' % warning) return 3 if ret else 0 def json(self, warnings=False, output='-'): """Print out the test results in json format warnings is not respected in this output """ ret = [len(r['errors']) for r in self.files.values()] with Reporter._output_handle(output) as handle: print(json.dumps(Reporter._filter_out_ok(self.files), indent=4, cls=ReporterEncoder), file=handle) return 3 if sum(ret) else 0 class Validator(with_metaclass(abc.ABCMeta, object)): """Validator instances are intended to be run on a single object. 
if you are scanning multiple objects for problems, you'll want to have a separate Validator for each one.""" def __init__(self, reporter=None): self.reporter = reporter @property @abc.abstractmethod def object_name(self): """Name of the object we validated""" pass @property @abc.abstractmethod def object_path(self): """Path of the object we validated""" pass @abc.abstractmethod def validate(self): """Run this method to generate the test results""" pass class ModuleValidator(Validator): REJECTLIST_PATTERNS = ('.git*', '*.pyc', '*.pyo', '.*', '*.md', '*.rst', '*.txt') REJECTLIST_FILES = frozenset(('.git', '.gitignore', '.travis.yml', '.gitattributes', '.gitmodules', 'COPYING', '__init__.py', 'VERSION', 'test-docs.sh')) REJECTLIST = REJECTLIST_FILES.union(REJECTLIST['MODULE']) PS_DOC_REJECTLIST = frozenset(( 'async_status.ps1', 'slurp.ps1', 'setup.ps1' )) # win_dsc is a dynamic arg spec, the docs won't ever match PS_ARG_VALIDATE_REJECTLIST = frozenset(('win_dsc.ps1', )) ACCEPTLIST_FUTURE_IMPORTS = frozenset(('absolute_import', 'division', 'print_function')) def __init__(self, path, analyze_arg_spec=False, collection=None, collection_version=None, base_branch=None, git_cache=None, reporter=None, routing=None): super(ModuleValidator, self).__init__(reporter=reporter or Reporter()) self.path = path self.basename = os.path.basename(self.path) self.name = os.path.splitext(self.basename)[0] self.analyze_arg_spec = analyze_arg_spec self._Version = LooseVersion self._StrictVersion = StrictVersion self.collection = collection self.collection_name = 'ansible.builtin' if self.collection: self._Version = SemanticVersion self._StrictVersion = SemanticVersion collection_namespace_path, collection_name = os.path.split(self.collection) self.collection_name = '%s.%s' % (os.path.basename(collection_namespace_path), collection_name) self.routing = routing self.collection_version = None if collection_version is not None: self.collection_version_str = collection_version self.collection_version = SemanticVersion(collection_version) self.base_branch = base_branch self.git_cache = git_cache or GitCache() self._python_module_override = False with open(path) as f: self.text = f.read() self.length = len(self.text.splitlines()) try: self.ast = ast.parse(self.text) except Exception: self.ast = None if base_branch: self.base_module = self._get_base_file() else: self.base_module = None def _create_version(self, v, collection_name=None): if not v: raise ValueError('Empty string is not a valid version') if collection_name == 'ansible.builtin': return LooseVersion(v) if collection_name is not None: return SemanticVersion(v) return self._Version(v) def _create_strict_version(self, v, collection_name=None): if not v: raise ValueError('Empty string is not a valid version') if collection_name == 'ansible.builtin': return StrictVersion(v) if collection_name is not None: return SemanticVersion(v) return self._StrictVersion(v) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): if not self.base_module: return try: os.remove(self.base_module) except Exception: pass @property def object_name(self): return self.basename @property def object_path(self): return self.path def _get_collection_meta(self): """Implement if we need this for version_added comparisons """ pass def _python_module(self): if self.path.endswith('.py') or self._python_module_override: return True return False def _powershell_module(self): if self.path.endswith('.ps1'): return True return False def _just_docs(self): """Module can 
contain just docs and from __future__ boilerplate """ try: for child in self.ast.body: if not isinstance(child, ast.Assign): # allowed from __future__ imports if isinstance(child, ast.ImportFrom) and child.module == '__future__': for future_import in child.names: if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS: break else: continue return False return True except AttributeError: return False def _get_base_branch_module_path(self): """List all paths within lib/ansible/modules to try and match a moved module""" return self.git_cache.base_module_paths.get(self.object_name) def _has_alias(self): """Return true if the module has any aliases.""" return self.object_name in self.git_cache.head_aliased_modules def _get_base_file(self): # In case of module moves, look for the original location base_path = self._get_base_branch_module_path() command = ['git', 'show', '%s:%s' % (self.base_branch, base_path or self.path)] p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() if int(p.returncode) != 0: return None t = tempfile.NamedTemporaryFile(delete=False) t.write(stdout) t.close() return t.name def _is_new_module(self): if self._has_alias(): return False return not self.object_name.startswith('_') and bool(self.base_branch) and not bool(self.base_module) def _check_interpreter(self, powershell=False): if powershell: if not self.text.startswith('#!powershell\n'): self.reporter.error( path=self.object_path, code='missing-powershell-interpreter', msg='Interpreter line is not "#!powershell"' ) return missing_python_interpreter = False if not self.text.startswith('#!/usr/bin/python'): if NEW_STYLE_PYTHON_MODULE_RE.search(to_bytes(self.text)): missing_python_interpreter = self.text.startswith('#!') # shebang optional, but if present must match else: missing_python_interpreter = True # shebang required if missing_python_interpreter: self.reporter.error( path=self.object_path, code='missing-python-interpreter', msg='Interpreter line is not "#!/usr/bin/python"', ) def _check_type_instead_of_isinstance(self, powershell=False): if powershell: return for line_no, line in enumerate(self.text.splitlines()): typekeyword = TYPE_REGEX.match(line) if typekeyword: # TODO: add column self.reporter.error( path=self.object_path, code='unidiomatic-typecheck', msg=('Type comparison using type() found. ' 'Use isinstance() instead'), line=line_no + 1 ) def _check_for_sys_exit(self): # Optimize out the happy path if 'sys.exit' not in self.text: return for line_no, line in enumerate(self.text.splitlines()): sys_exit_usage = SYS_EXIT_REGEX.match(line) if sys_exit_usage: # TODO: add column self.reporter.error( path=self.object_path, code='use-fail-json-not-sys-exit', msg='sys.exit() call found. 
Should be exit_json/fail_json', line=line_no + 1 ) def _check_gpl3_header(self): header = '\n'.join(self.text.split('\n')[:20]) if ('GNU General Public License' not in header or ('version 3' not in header and 'v3.0' not in header)): self.reporter.error( path=self.object_path, code='missing-gplv3-license', msg='GPLv3 license header not found in the first 20 lines of the module' ) elif self._is_new_module(): if len([line for line in header if 'GNU General Public License' in line]) > 1: self.reporter.error( path=self.object_path, code='use-short-gplv3-license', msg='Found old style GPLv3 license header: ' 'https://docs.ansible.com/ansible-core/devel/dev_guide/developing_modules_documenting.html#copyright' ) def _check_for_subprocess(self): for child in self.ast.body: if isinstance(child, ast.Import): if child.names[0].name == 'subprocess': for line_no, line in enumerate(self.text.splitlines()): sp_match = SUBPROCESS_REGEX.search(line) if sp_match: self.reporter.error( path=self.object_path, code='use-run-command-not-popen', msg=('subprocess.Popen call found. Should be module.run_command'), line=(line_no + 1), column=(sp_match.span()[0] + 1) ) def _check_for_os_call(self): if 'os.call' in self.text: for line_no, line in enumerate(self.text.splitlines()): os_call_match = OS_CALL_REGEX.search(line) if os_call_match: self.reporter.error( path=self.object_path, code='use-run-command-not-os-call', msg=('os.call() call found. Should be module.run_command'), line=(line_no + 1), column=(os_call_match.span()[0] + 1) ) def _find_rejectlist_imports(self): for child in self.ast.body: names = [] if isinstance(child, ast.Import): names.extend(child.names) elif isinstance(child, TRY_EXCEPT): bodies = child.body for handler in child.handlers: bodies.extend(handler.body) for grandchild in bodies: if isinstance(grandchild, ast.Import): names.extend(grandchild.names) for name in names: # TODO: Add line/col for rejectlist_import, options in REJECTLIST_IMPORTS.items(): if re.search(rejectlist_import, name.name): new_only = options['new_only'] if self._is_new_module() and new_only: self.reporter.error( path=self.object_path, **options['error'] ) elif not new_only: self.reporter.error( path=self.object_path, **options['error'] ) def _find_module_utils(self): linenos = [] found_basic = False for child in self.ast.body: if isinstance(child, (ast.Import, ast.ImportFrom)): names = [] try: names.append(child.module) if child.module.endswith('.basic'): found_basic = True except AttributeError: pass names.extend([n.name for n in child.names]) if [n for n in names if n.startswith('ansible.module_utils')]: linenos.append(child.lineno) for name in child.names: if ('module_utils' in getattr(child, 'module', '') and isinstance(name, ast.alias) and name.name == '*'): msg = ( 'module-utils-specific-import', ('module_utils imports should import specific ' 'components, not "*"') ) if self._is_new_module(): self.reporter.error( path=self.object_path, code=msg[0], msg=msg[1], line=child.lineno ) else: self.reporter.warning( path=self.object_path, code=msg[0], msg=msg[1], line=child.lineno ) if (isinstance(name, ast.alias) and name.name == 'basic'): found_basic = True if not found_basic: self.reporter.warning( path=self.object_path, code='missing-module-utils-basic-import', msg='Did not find "ansible.module_utils.basic" import' ) return linenos def _get_first_callable(self): linenos = [] for child in self.ast.body: if isinstance(child, (ast.FunctionDef, ast.ClassDef)): linenos.append(child.lineno) return min(linenos) if linenos else 
None def _find_has_import(self): for child in self.ast.body: found_try_except_import = False found_has = False if isinstance(child, TRY_EXCEPT): bodies = child.body for handler in child.handlers: bodies.extend(handler.body) for grandchild in bodies: if isinstance(grandchild, ast.Import): found_try_except_import = True if isinstance(grandchild, ast.Assign): for target in grandchild.targets: if not isinstance(target, ast.Name): continue if target.id.lower().startswith('has_'): found_has = True if found_try_except_import and not found_has: # TODO: Add line/col self.reporter.warning( path=self.object_path, code='try-except-missing-has', msg='Found Try/Except block without HAS_ assignment' ) def _ensure_imports_below_docs(self, doc_info, first_callable): try: min_doc_line = min( [doc_info[key]['lineno'] for key in doc_info if doc_info[key]['lineno']] ) except ValueError: # We can't perform this validation, as there are no DOCs provided at all return max_doc_line = max( [doc_info[key]['end_lineno'] for key in doc_info if doc_info[key]['end_lineno']] ) import_lines = [] for child in self.ast.body: if isinstance(child, (ast.Import, ast.ImportFrom)): if isinstance(child, ast.ImportFrom) and child.module == '__future__': # allowed from __future__ imports for future_import in child.names: if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS: self.reporter.error( path=self.object_path, code='illegal-future-imports', msg=('Only the following from __future__ imports are allowed: %s' % ', '.join(self.ACCEPTLIST_FUTURE_IMPORTS)), line=child.lineno ) break else: # for-else. If we didn't find a problem nad break out of the loop, then this is a legal import continue import_lines.append(child.lineno) if child.lineno < min_doc_line: self.reporter.error( path=self.object_path, code='import-before-documentation', msg=('Import found before documentation variables. ' 'All imports must appear below ' 'DOCUMENTATION/EXAMPLES/RETURN.'), line=child.lineno ) break elif isinstance(child, TRY_EXCEPT): bodies = child.body for handler in child.handlers: bodies.extend(handler.body) for grandchild in bodies: if isinstance(grandchild, (ast.Import, ast.ImportFrom)): import_lines.append(grandchild.lineno) if grandchild.lineno < min_doc_line: self.reporter.error( path=self.object_path, code='import-before-documentation', msg=('Import found before documentation ' 'variables. 
All imports must appear below ' 'DOCUMENTATION/EXAMPLES/RETURN.'), line=child.lineno ) break for import_line in import_lines: if not (max_doc_line < import_line < first_callable): msg = ( 'import-placement', ('Imports should be directly below DOCUMENTATION/EXAMPLES/' 'RETURN.') ) if self._is_new_module(): self.reporter.error( path=self.object_path, code=msg[0], msg=msg[1], line=import_line ) else: self.reporter.warning( path=self.object_path, code=msg[0], msg=msg[1], line=import_line ) def _validate_ps_replacers(self): # loop all (for/else + error) # get module list for each # check "shape" of each module name module_requires = r'(?im)^#\s*requires\s+\-module(?:s?)\s*(Ansible\.ModuleUtils\..+)' csharp_requires = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*(Ansible\..+)' found_requires = False for req_stmt in re.finditer(module_requires, self.text): found_requires = True # this will bomb on dictionary format - "don't do that" module_list = [x.strip() for x in req_stmt.group(1).split(',')] if len(module_list) > 1: self.reporter.error( path=self.object_path, code='multiple-utils-per-requires', msg='Ansible.ModuleUtils requirements do not support multiple modules per statement: "%s"' % req_stmt.group(0) ) continue module_name = module_list[0] if module_name.lower().endswith('.psm1'): self.reporter.error( path=self.object_path, code='invalid-requires-extension', msg='Module #Requires should not end in .psm1: "%s"' % module_name ) for req_stmt in re.finditer(csharp_requires, self.text): found_requires = True # this will bomb on dictionary format - "don't do that" module_list = [x.strip() for x in req_stmt.group(1).split(',')] if len(module_list) > 1: self.reporter.error( path=self.object_path, code='multiple-csharp-utils-per-requires', msg='Ansible C# util requirements do not support multiple utils per statement: "%s"' % req_stmt.group(0) ) continue module_name = module_list[0] if module_name.lower().endswith('.cs'): self.reporter.error( path=self.object_path, code='illegal-extension-cs', msg='Module #AnsibleRequires -CSharpUtil should not end in .cs: "%s"' % module_name ) # also accept the legacy #POWERSHELL_COMMON replacer signal if not found_requires and REPLACER_WINDOWS not in self.text: self.reporter.error( path=self.object_path, code='missing-module-utils-import-csharp-requirements', msg='No Ansible.ModuleUtils or C# Ansible util requirements/imports found' ) def _find_ps_docs_py_file(self): if self.object_name in self.PS_DOC_REJECTLIST: return py_path = self.path.replace('.ps1', '.py') if not os.path.isfile(py_path): self.reporter.error( path=self.object_path, code='missing-python-doc', msg='Missing python documentation file' ) return py_path def _get_docs(self): docs = { 'DOCUMENTATION': { 'value': None, 'lineno': 0, 'end_lineno': 0, }, 'EXAMPLES': { 'value': None, 'lineno': 0, 'end_lineno': 0, }, 'RETURN': { 'value': None, 'lineno': 0, 'end_lineno': 0, }, } for child in self.ast.body: if isinstance(child, ast.Assign): for grandchild in child.targets: if not isinstance(grandchild, ast.Name): continue if grandchild.id == 'DOCUMENTATION': docs['DOCUMENTATION']['value'] = child.value.s docs['DOCUMENTATION']['lineno'] = child.lineno docs['DOCUMENTATION']['end_lineno'] = ( child.lineno + len(child.value.s.splitlines()) ) elif grandchild.id == 'EXAMPLES': docs['EXAMPLES']['value'] = child.value.s docs['EXAMPLES']['lineno'] = child.lineno docs['EXAMPLES']['end_lineno'] = ( child.lineno + len(child.value.s.splitlines()) ) elif grandchild.id == 'RETURN': docs['RETURN']['value'] = child.value.s 
docs['RETURN']['lineno'] = child.lineno docs['RETURN']['end_lineno'] = ( child.lineno + len(child.value.s.splitlines()) ) return docs def _validate_docs_schema(self, doc, schema, name, error_code): # TODO: Add line/col errors = [] try: schema(doc) except Exception as e: for error in e.errors: error.data = doc errors.extend(e.errors) for error in errors: path = [str(p) for p in error.path] local_error_code = getattr(error, 'ansible_error_code', error_code) if isinstance(error.data, dict): error_message = humanize_error(error.data, error) else: error_message = error if path: combined_path = '%s.%s' % (name, '.'.join(path)) else: combined_path = name self.reporter.error( path=self.object_path, code=local_error_code, msg='%s: %s' % (combined_path, error_message) ) def _validate_docs(self): doc_info = self._get_docs() doc = None documentation_exists = False examples_exist = False returns_exist = False # We have three ways of marking deprecated/removed files. Have to check each one # individually and then make sure they all agree filename_deprecated_or_removed = False deprecated = False removed = False doc_deprecated = None # doc legally might not exist routing_says_deprecated = False if self.object_name.startswith('_') and not os.path.islink(self.object_path): filename_deprecated_or_removed = True # We are testing a collection if self.routing: routing_deprecation = self.routing.get('plugin_routing', {}).get('modules', {}).get(self.name, {}).get('deprecation', {}) if routing_deprecation: # meta/runtime.yml says this is deprecated routing_says_deprecated = True deprecated = True if not removed: if not bool(doc_info['DOCUMENTATION']['value']): self.reporter.error( path=self.object_path, code='missing-documentation', msg='No DOCUMENTATION provided' ) else: documentation_exists = True doc, errors, traces = parse_yaml( doc_info['DOCUMENTATION']['value'], doc_info['DOCUMENTATION']['lineno'], self.name, 'DOCUMENTATION' ) if doc: add_collection_to_versions_and_dates(doc, self.collection_name, is_module=True) for error in errors: self.reporter.error( path=self.object_path, code='documentation-syntax-error', **error ) for trace in traces: self.reporter.trace( path=self.object_path, tracebk=trace ) if not errors and not traces: missing_fragment = False with CaptureStd(): try: get_docstring(self.path, fragment_loader, verbose=True, collection_name=self.collection_name, is_module=True) except AssertionError: fragment = doc['extends_documentation_fragment'] self.reporter.error( path=self.object_path, code='missing-doc-fragment', msg='DOCUMENTATION fragment missing: %s' % fragment ) missing_fragment = True except Exception as e: self.reporter.trace( path=self.object_path, tracebk=traceback.format_exc() ) self.reporter.error( path=self.object_path, code='documentation-error', msg='Unknown DOCUMENTATION error, see TRACE: %s' % e ) if not missing_fragment: add_fragments(doc, self.object_path, fragment_loader=fragment_loader, is_module=True) if 'options' in doc and doc['options'] is None: self.reporter.error( path=self.object_path, code='invalid-documentation-options', msg='DOCUMENTATION.options must be a dictionary/hash when used', ) if 'deprecated' in doc and doc.get('deprecated'): doc_deprecated = True doc_deprecation = doc['deprecated'] documentation_collection = doc_deprecation.get('removed_from_collection') if documentation_collection != self.collection_name: self.reporter.error( path=self.object_path, code='deprecation-wrong-collection', msg='"DOCUMENTATION.deprecation.removed_from_collection must be the 
current collection name: %r vs. %r' % ( documentation_collection, self.collection_name) ) else: doc_deprecated = False if os.path.islink(self.object_path): # This module has an alias, which we can tell as it's a symlink # Rather than checking for `module: $filename` we need to check against the true filename self._validate_docs_schema( doc, doc_schema( os.readlink(self.object_path).split('.')[0], for_collection=bool(self.collection), deprecated_module=deprecated, ), 'DOCUMENTATION', 'invalid-documentation', ) else: # This is the normal case self._validate_docs_schema( doc, doc_schema( self.object_name.split('.')[0], for_collection=bool(self.collection), deprecated_module=deprecated, ), 'DOCUMENTATION', 'invalid-documentation', ) if not self.collection: existing_doc = self._check_for_new_args(doc) self._check_version_added(doc, existing_doc) if not bool(doc_info['EXAMPLES']['value']): self.reporter.error( path=self.object_path, code='missing-examples', msg='No EXAMPLES provided' ) else: _doc, errors, traces = parse_yaml(doc_info['EXAMPLES']['value'], doc_info['EXAMPLES']['lineno'], self.name, 'EXAMPLES', load_all=True, ansible_loader=True) for error in errors: self.reporter.error( path=self.object_path, code='invalid-examples', **error ) for trace in traces: self.reporter.trace( path=self.object_path, tracebk=trace ) if not bool(doc_info['RETURN']['value']): if self._is_new_module(): self.reporter.error( path=self.object_path, code='missing-return', msg='No RETURN provided' ) else: self.reporter.warning( path=self.object_path, code='missing-return-legacy', msg='No RETURN provided' ) else: data, errors, traces = parse_yaml(doc_info['RETURN']['value'], doc_info['RETURN']['lineno'], self.name, 'RETURN') if data: add_collection_to_versions_and_dates(data, self.collection_name, is_module=True, return_docs=True) self._validate_docs_schema(data, return_schema(for_collection=bool(self.collection)), 'RETURN', 'return-syntax-error') for error in errors: self.reporter.error( path=self.object_path, code='return-syntax-error', **error ) for trace in traces: self.reporter.trace( path=self.object_path, tracebk=trace ) # Check for mismatched deprecation if not self.collection: mismatched_deprecation = True if not (filename_deprecated_or_removed or removed or deprecated or doc_deprecated): mismatched_deprecation = False else: if (filename_deprecated_or_removed and doc_deprecated): mismatched_deprecation = False if (filename_deprecated_or_removed and removed and not (documentation_exists or examples_exist or returns_exist)): mismatched_deprecation = False if mismatched_deprecation: self.reporter.error( path=self.object_path, code='deprecation-mismatch', msg='Module deprecation/removed must agree in documentation, by prepending filename with' ' "_", and setting DOCUMENTATION.deprecated for deprecation or by removing all' ' documentation for removed' ) else: # We are testing a collection if self.object_name.startswith('_'): self.reporter.error( path=self.object_path, code='collections-no-underscore-on-deprecation', msg='Deprecated content in collections MUST NOT start with "_", update meta/runtime.yml instead', ) if not (doc_deprecated == routing_says_deprecated): # DOCUMENTATION.deprecated and meta/runtime.yml disagree self.reporter.error( path=self.object_path, code='deprecation-mismatch', msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree.' ) elif routing_says_deprecated: # Both DOCUMENTATION.deprecated and meta/runtime.yml agree that the module is deprecated. 
# Make sure they give the same version or date. routing_date = routing_deprecation.get('removal_date') routing_version = routing_deprecation.get('removal_version') # The versions and dates in the module documentation are auto-tagged, so remove the tag # to make comparison possible and to avoid confusing the user. documentation_date = doc_deprecation.get('removed_at_date') documentation_version = doc_deprecation.get('removed_in') if not compare_dates(routing_date, documentation_date): self.reporter.error( path=self.object_path, code='deprecation-mismatch', msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal date: %r vs. %r' % ( routing_date, documentation_date) ) if routing_version != documentation_version: self.reporter.error( path=self.object_path, code='deprecation-mismatch', msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal version: %r vs. %r' % ( routing_version, documentation_version) ) # In the future we should error if ANSIBLE_METADATA exists in a collection return doc_info, doc def _check_version_added(self, doc, existing_doc): version_added_raw = doc.get('version_added') try: collection_name = doc.get('version_added_collection') version_added = self._create_strict_version( str(version_added_raw or '0.0'), collection_name=collection_name) except ValueError as e: version_added = version_added_raw or '0.0' if self._is_new_module() or version_added != 'historical': # already reported during schema validation, except: if version_added == 'historical': self.reporter.error( path=self.object_path, code='module-invalid-version-added', msg='version_added is not a valid version number: %r. Error: %s' % (version_added, e) ) return if existing_doc and str(version_added_raw) != str(existing_doc.get('version_added')): self.reporter.error( path=self.object_path, code='module-incorrect-version-added', msg='version_added should be %r. Currently %r' % (existing_doc.get('version_added'), version_added_raw) ) if not self._is_new_module(): return should_be = '.'.join(ansible_version.split('.')[:2]) strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin') if (version_added < strict_ansible_version or strict_ansible_version < version_added): self.reporter.error( path=self.object_path, code='module-incorrect-version-added', msg='version_added should be %r. 
Currently %r' % (should_be, version_added_raw) ) def _validate_ansible_module_call(self, docs): try: spec, kwargs = get_argument_spec(self.path, self.collection) except AnsibleModuleNotInitialized: self.reporter.error( path=self.object_path, code='ansible-module-not-initialized', msg="Execution of the module did not result in initialization of AnsibleModule", ) return except AnsibleModuleImportError as e: self.reporter.error( path=self.object_path, code='import-error', msg="Exception attempting to import module for argument_spec introspection, '%s'" % e ) self.reporter.trace( path=self.object_path, tracebk=traceback.format_exc() ) return schema = ansible_module_kwargs_schema(self.object_name.split('.')[0], for_collection=bool(self.collection)) self._validate_docs_schema(kwargs, schema, 'AnsibleModule', 'invalid-ansiblemodule-schema') self._validate_argument_spec(docs, spec, kwargs) def _validate_list_of_module_args(self, name, terms, spec, context): if terms is None: return if not isinstance(terms, (list, tuple)): # This is already reported by schema checking return for check in terms: if not isinstance(check, (list, tuple)): # This is already reported by schema checking continue bad_term = False for term in check: if not isinstance(term, string_types): msg = name if context: msg += " found in %s" % " -> ".join(context) msg += " must contain strings in the lists or tuples; found value %r" % (term, ) self.reporter.error( path=self.object_path, code=name + '-type', msg=msg, ) bad_term = True if bad_term: continue if len(set(check)) != len(check): msg = name if context: msg += " found in %s" % " -> ".join(context) msg += " has repeated terms" self.reporter.error( path=self.object_path, code=name + '-collision', msg=msg, ) if not set(check) <= set(spec): msg = name if context: msg += " found in %s" % " -> ".join(context) msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(check).difference(set(spec)))) self.reporter.error( path=self.object_path, code=name + '-unknown', msg=msg, ) def _validate_required_if(self, terms, spec, context, module): if terms is None: return if not isinstance(terms, (list, tuple)): # This is already reported by schema checking return for check in terms: if not isinstance(check, (list, tuple)) or len(check) not in [3, 4]: # This is already reported by schema checking continue if len(check) == 4 and not isinstance(check[3], bool): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " must have forth value omitted or of type bool; got %r" % (check[3], ) self.reporter.error( path=self.object_path, code='required_if-is_one_of-type', msg=msg, ) requirements = check[2] if not isinstance(requirements, (list, tuple)): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " must have third value (requirements) being a list or tuple; got type %r" % (requirements, ) self.reporter.error( path=self.object_path, code='required_if-requirements-type', msg=msg, ) continue bad_term = False for term in requirements: if not isinstance(term, string_types): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " must have only strings in third value (requirements); got %r" % (term, ) self.reporter.error( path=self.object_path, code='required_if-requirements-type', msg=msg, ) bad_term = True if bad_term: continue if len(set(requirements)) != len(requirements): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " has repeated terms in 
requirements" self.reporter.error( path=self.object_path, code='required_if-requirements-collision', msg=msg, ) if not set(requirements) <= set(spec): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " contains terms in requirements which are not part of argument_spec: %s" % ", ".join(sorted(set(requirements).difference(set(spec)))) self.reporter.error( path=self.object_path, code='required_if-requirements-unknown', msg=msg, ) key = check[0] if key not in spec: msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " must have its key %s in argument_spec" % key self.reporter.error( path=self.object_path, code='required_if-unknown-key', msg=msg, ) continue if key in requirements: msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " contains its key %s in requirements" % key self.reporter.error( path=self.object_path, code='required_if-key-in-requirements', msg=msg, ) value = check[1] if value is not None: _type = spec[key].get('type', 'str') if callable(_type): _type_checker = _type else: _type_checker = DEFAULT_TYPE_VALIDATORS.get(_type) try: with CaptureStd(): dummy = _type_checker(value) except (Exception, SystemExit): msg = "required_if" if context: msg += " found in %s" % " -> ".join(context) msg += " has value %r which does not fit to %s's parameter type %r" % (value, key, _type) self.reporter.error( path=self.object_path, code='required_if-value-type', msg=msg, ) def _validate_required_by(self, terms, spec, context): if terms is None: return if not isinstance(terms, Mapping): # This is already reported by schema checking return for key, value in terms.items(): if isinstance(value, string_types): value = [value] if not isinstance(value, (list, tuple)): # This is already reported by schema checking continue for term in value: if not isinstance(term, string_types): # This is already reported by schema checking continue if len(set(value)) != len(value) or key in value: msg = "required_by" if context: msg += " found in %s" % " -> ".join(context) msg += " has repeated terms" self.reporter.error( path=self.object_path, code='required_by-collision', msg=msg, ) if not set(value) <= set(spec) or key not in spec: msg = "required_by" if context: msg += " found in %s" % " -> ".join(context) msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(value).difference(set(spec)))) self.reporter.error( path=self.object_path, code='required_by-unknown', msg=msg, ) def _validate_argument_spec(self, docs, spec, kwargs, context=None, last_context_spec=None): if not self.analyze_arg_spec: return if docs is None: docs = {} if context is None: context = [] if last_context_spec is None: last_context_spec = kwargs try: if not context: add_fragments(docs, self.object_path, fragment_loader=fragment_loader, is_module=True) except Exception: # Cannot merge fragments return # Use this to access type checkers later module = NoArgsAnsibleModule({}) self._validate_list_of_module_args('mutually_exclusive', last_context_spec.get('mutually_exclusive'), spec, context) self._validate_list_of_module_args('required_together', last_context_spec.get('required_together'), spec, context) self._validate_list_of_module_args('required_one_of', last_context_spec.get('required_one_of'), spec, context) self._validate_required_if(last_context_spec.get('required_if'), spec, context, module) self._validate_required_by(last_context_spec.get('required_by'), spec, context) provider_args = set() args_from_argspec = 
set() deprecated_args_from_argspec = set() doc_options = docs.get('options', {}) if doc_options is None: doc_options = {} for arg, data in spec.items(): restricted_argument_names = ('message', 'syslog_facility') if arg.lower() in restricted_argument_names: msg = "Argument '%s' in argument_spec " % arg if context: msg += " found in %s" % " -> ".join(context) msg += "must not be one of %s as it is used " \ "internally by Ansible Core Engine" % (",".join(restricted_argument_names)) self.reporter.error( path=self.object_path, code='invalid-argument-name', msg=msg, ) continue if 'aliases' in data: for al in data['aliases']: if al.lower() in restricted_argument_names: msg = "Argument alias '%s' in argument_spec " % al if context: msg += " found in %s" % " -> ".join(context) msg += "must not be one of %s as it is used " \ "internally by Ansible Core Engine" % (",".join(restricted_argument_names)) self.reporter.error( path=self.object_path, code='invalid-argument-name', msg=msg, ) continue # Could this a place where secrets are leaked? # If it is type: path we know it's not a secret key as it's a file path. # If it is type: bool it is more likely a flag indicating that something is secret, than an actual secret. if all(( data.get('no_log') is None, is_potential_secret_option(arg), data.get('type') not in ("path", "bool"), data.get('choices') is None, )): msg = "Argument '%s' in argument_spec could be a secret, though doesn't have `no_log` set" % arg if context: msg += " found in %s" % " -> ".join(context) self.reporter.error( path=self.object_path, code='no-log-needed', msg=msg, ) if not isinstance(data, dict): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " must be a dictionary/hash when used" self.reporter.error( path=self.object_path, code='invalid-argument-spec', msg=msg, ) continue removed_at_date = data.get('removed_at_date', None) if removed_at_date is not None: try: if parse_isodate(removed_at_date, allow_date=False) < datetime.date.today(): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has a removed_at_date '%s' before today" % removed_at_date self.reporter.error( path=self.object_path, code='deprecated-date', msg=msg, ) except ValueError: # This should only happen when removed_at_date is not in ISO format. Since schema # validation already reported this as an error, don't report it a second time. pass deprecated_aliases = data.get('deprecated_aliases', None) if deprecated_aliases is not None: for deprecated_alias in deprecated_aliases: if 'name' in deprecated_alias and 'date' in deprecated_alias: try: date = deprecated_alias['date'] if parse_isodate(date, allow_date=False) < datetime.date.today(): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has deprecated aliases '%s' with removal date '%s' before today" % ( deprecated_alias['name'], deprecated_alias['date']) self.reporter.error( path=self.object_path, code='deprecated-date', msg=msg, ) except ValueError: # This should only happen when deprecated_alias['date'] is not in ISO format. Since # schema validation already reported this as an error, don't report it a second # time. 
pass has_version = False if self.collection and self.collection_version is not None: compare_version = self.collection_version version_of_what = "this collection (%s)" % self.collection_version_str code_prefix = 'collection' has_version = True elif not self.collection: compare_version = LOOSE_ANSIBLE_VERSION version_of_what = "Ansible (%s)" % ansible_version code_prefix = 'ansible' has_version = True removed_in_version = data.get('removed_in_version', None) if removed_in_version is not None: try: collection_name = data.get('removed_from_collection') removed_in = self._create_version(str(removed_in_version), collection_name=collection_name) if has_version and collection_name == self.collection_name and compare_version >= removed_in: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has a deprecated removed_in_version %r," % removed_in_version msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what self.reporter.error( path=self.object_path, code=code_prefix + '-deprecated-version', msg=msg, ) except ValueError as e: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has an invalid removed_in_version number %r: %s" % (removed_in_version, e) self.reporter.error( path=self.object_path, code='invalid-deprecated-version', msg=msg, ) except TypeError: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has an invalid removed_in_version number %r: " % (removed_in_version, ) msg += " error while comparing to version of %s" % version_of_what self.reporter.error( path=self.object_path, code='invalid-deprecated-version', msg=msg, ) if deprecated_aliases is not None: for deprecated_alias in deprecated_aliases: if 'name' in deprecated_alias and 'version' in deprecated_alias: try: collection_name = deprecated_alias.get('collection_name') version = self._create_version(str(deprecated_alias['version']), collection_name=collection_name) if has_version and collection_name == self.collection_name and compare_version >= version: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has deprecated aliases '%s' with removal in version %r," % ( deprecated_alias['name'], deprecated_alias['version']) msg += " i.e. 
the version is less than or equal to the current version of %s" % version_of_what self.reporter.error( path=self.object_path, code=code_prefix + '-deprecated-version', msg=msg, ) except ValueError as e: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has deprecated aliases '%s' with invalid removal version %r: %s" % ( deprecated_alias['name'], deprecated_alias['version'], e) self.reporter.error( path=self.object_path, code='invalid-deprecated-version', msg=msg, ) except TypeError: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has deprecated aliases '%s' with invalid removal version %r:" % ( deprecated_alias['name'], deprecated_alias['version']) msg += " error while comparing to version of %s" % version_of_what self.reporter.error( path=self.object_path, code='invalid-deprecated-version', msg=msg, ) aliases = data.get('aliases', []) if arg in aliases: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " is specified as its own alias" self.reporter.error( path=self.object_path, code='parameter-alias-self', msg=msg ) if len(aliases) > len(set(aliases)): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has at least one alias specified multiple times in aliases" self.reporter.error( path=self.object_path, code='parameter-alias-repeated', msg=msg ) if not context and arg == 'state': bad_states = set(['list', 'info', 'get']) & set(data.get('choices', set())) for bad_state in bad_states: self.reporter.error( path=self.object_path, code='parameter-state-invalid-choice', msg="Argument 'state' includes the value '%s' as a choice" % bad_state) if not data.get('removed_in_version', None) and not data.get('removed_at_date', None): args_from_argspec.add(arg) args_from_argspec.update(aliases) else: deprecated_args_from_argspec.add(arg) deprecated_args_from_argspec.update(aliases) if arg == 'provider' and self.object_path.startswith('lib/ansible/modules/network/'): if data.get('options') is not None and not isinstance(data.get('options'), Mapping): self.reporter.error( path=self.object_path, code='invalid-argument-spec-options', msg="Argument 'options' in argument_spec['provider'] must be a dictionary/hash when used", ) elif data.get('options'): # Record provider options from network modules, for later comparison for provider_arg, provider_data in data.get('options', {}).items(): provider_args.add(provider_arg) provider_args.update(provider_data.get('aliases', [])) if data.get('required') and data.get('default', object) != object: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " is marked as required but specifies a default. 
Arguments with a" \ " default should not be marked as required" self.reporter.error( path=self.object_path, code='no-default-for-required-parameter', msg=msg ) if arg in provider_args: # Provider args are being removed from network module top level # don't validate docs<->arg_spec checks below continue _type = data.get('type', 'str') if callable(_type): _type_checker = _type else: _type_checker = DEFAULT_TYPE_VALIDATORS.get(_type) _elements = data.get('elements') if (_type == 'list') and not _elements: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines type as list but elements is not defined" self.reporter.error( path=self.object_path, code='parameter-list-no-elements', msg=msg ) if _elements: if not callable(_elements): DEFAULT_TYPE_VALIDATORS.get(_elements) if _type != 'list': msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines elements as %s but it is valid only when value of parameter type is list" % _elements self.reporter.error( path=self.object_path, code='parameter-invalid-elements', msg=msg ) arg_default = None if 'default' in data and not is_empty(data['default']): try: with CaptureStd(): arg_default = _type_checker(data['default']) except (Exception, SystemExit): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines default as (%r) but this is incompatible with parameter type %r" % (data['default'], _type) self.reporter.error( path=self.object_path, code='incompatible-default-type', msg=msg ) continue doc_options_args = [] for alias in sorted(set([arg] + list(aliases))): if alias in doc_options: doc_options_args.append(alias) if len(doc_options_args) == 0: # Undocumented arguments will be handled later (search for undocumented-parameter) doc_options_arg = {} else: doc_options_arg = doc_options[doc_options_args[0]] if len(doc_options_args) > 1: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " with aliases %s is documented multiple times, namely as %s" % ( ", ".join([("'%s'" % alias) for alias in aliases]), ", ".join([("'%s'" % alias) for alias in doc_options_args]) ) self.reporter.error( path=self.object_path, code='parameter-documented-multiple-times', msg=msg ) try: doc_default = None if 'default' in doc_options_arg and not is_empty(doc_options_arg['default']): with CaptureStd(): doc_default = _type_checker(doc_options_arg['default']) except (Exception, SystemExit): msg = "Argument '%s' in documentation" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines default as (%r) but this is incompatible with parameter type %r" % (doc_options_arg.get('default'), _type) self.reporter.error( path=self.object_path, code='doc-default-incompatible-type', msg=msg ) continue if arg_default != doc_default: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines default as (%r) but documentation defines default as (%r)" % (arg_default, doc_default) self.reporter.error( path=self.object_path, code='doc-default-does-not-match-spec', msg=msg ) doc_type = doc_options_arg.get('type') if 'type' in data and data['type'] is not None: if doc_type is None: if not arg.startswith('_'): # hidden parameter, for example _raw_params msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines type as %r but 
documentation doesn't define type" % (data['type']) self.reporter.error( path=self.object_path, code='parameter-type-not-in-doc', msg=msg ) elif data['type'] != doc_type: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines type as %r but documentation defines type as %r" % (data['type'], doc_type) self.reporter.error( path=self.object_path, code='doc-type-does-not-match-spec', msg=msg ) else: if doc_type is None: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " uses default type ('str') but documentation doesn't define type" self.reporter.error( path=self.object_path, code='doc-missing-type', msg=msg ) elif doc_type != 'str': msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " implies type as 'str' but documentation defines as %r" % doc_type self.reporter.error( path=self.object_path, code='implied-parameter-type-mismatch', msg=msg ) doc_choices = [] try: for choice in doc_options_arg.get('choices', []): try: with CaptureStd(): doc_choices.append(_type_checker(choice)) except (Exception, SystemExit): msg = "Argument '%s' in documentation" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type) self.reporter.error( path=self.object_path, code='doc-choices-incompatible-type', msg=msg ) raise StopIteration() except StopIteration: continue arg_choices = [] try: for choice in data.get('choices', []): try: with CaptureStd(): arg_choices.append(_type_checker(choice)) except (Exception, SystemExit): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type) self.reporter.error( path=self.object_path, code='incompatible-choices', msg=msg ) raise StopIteration() except StopIteration: continue if not compare_unordered_lists(arg_choices, doc_choices): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines choices as (%r) but documentation defines choices as (%r)" % (arg_choices, doc_choices) self.reporter.error( path=self.object_path, code='doc-choices-do-not-match-spec', msg=msg ) doc_required = doc_options_arg.get('required', False) data_required = data.get('required', False) if (doc_required or data_required) and not (doc_required and data_required): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) if doc_required: msg += " is not required, but is documented as being required" else: msg += " is required, but is not documented as being required" self.reporter.error( path=self.object_path, code='doc-required-mismatch', msg=msg ) doc_elements = doc_options_arg.get('elements', None) doc_type = doc_options_arg.get('type', 'str') data_elements = data.get('elements', None) if (doc_elements and not doc_type == 'list'): msg = "Argument '%s' " % arg if context: msg += " found in %s" % " -> ".join(context) msg += " defines parameter elements as %s but it is valid only when value of parameter type is list" % doc_elements self.reporter.error( path=self.object_path, code='doc-elements-invalid', msg=msg ) if (doc_elements or data_elements) and not (doc_elements == data_elements): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> 
".join(context) if data_elements: msg += " specifies elements as %s," % data_elements else: msg += " does not specify elements," if doc_elements: msg += "but elements is documented as being %s" % doc_elements else: msg += "but elements is not documented" self.reporter.error( path=self.object_path, code='doc-elements-mismatch', msg=msg ) spec_suboptions = data.get('options') doc_suboptions = doc_options_arg.get('suboptions', {}) if spec_suboptions: if not doc_suboptions: msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " has sub-options but documentation does not define it" self.reporter.error( path=self.object_path, code='missing-suboption-docs', msg=msg ) self._validate_argument_spec({'options': doc_suboptions}, spec_suboptions, kwargs, context=context + [arg], last_context_spec=data) for arg in args_from_argspec: if not str(arg).isidentifier(): msg = "Argument '%s' in argument_spec" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " is not a valid python identifier" self.reporter.error( path=self.object_path, code='parameter-invalid', msg=msg ) if docs: args_from_docs = set() for arg, data in doc_options.items(): args_from_docs.add(arg) args_from_docs.update(data.get('aliases', [])) args_missing_from_docs = args_from_argspec.difference(args_from_docs) docs_missing_from_args = args_from_docs.difference(args_from_argspec | deprecated_args_from_argspec) for arg in args_missing_from_docs: if arg in provider_args: # Provider args are being removed from network module top level # So they are likely not documented on purpose continue msg = "Argument '%s'" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " is listed in the argument_spec, but not documented in the module documentation" self.reporter.error( path=self.object_path, code='undocumented-parameter', msg=msg ) for arg in docs_missing_from_args: msg = "Argument '%s'" % arg if context: msg += " found in %s" % " -> ".join(context) msg += " is listed in DOCUMENTATION.options, but not accepted by the module argument_spec" self.reporter.error( path=self.object_path, code='nonexistent-parameter-documented', msg=msg ) def _check_for_new_args(self, doc): if not self.base_branch or self._is_new_module(): return with CaptureStd(): try: existing_doc, dummy_examples, dummy_return, existing_metadata = get_docstring( self.base_module, fragment_loader, verbose=True, collection_name=self.collection_name, is_module=True) existing_options = existing_doc.get('options', {}) or {} except AssertionError: fragment = doc['extends_documentation_fragment'] self.reporter.warning( path=self.object_path, code='missing-existing-doc-fragment', msg='Pre-existing DOCUMENTATION fragment missing: %s' % fragment ) return except Exception as e: self.reporter.warning_trace( path=self.object_path, tracebk=e ) self.reporter.warning( path=self.object_path, code='unknown-doc-fragment', msg=('Unknown pre-existing DOCUMENTATION error, see TRACE. 
Submodule refs may need updated') ) return try: mod_collection_name = existing_doc.get('version_added_collection') mod_version_added = self._create_strict_version( str(existing_doc.get('version_added', '0.0')), collection_name=mod_collection_name) except ValueError: mod_collection_name = self.collection_name mod_version_added = self._create_strict_version('0.0') options = doc.get('options', {}) or {} should_be = '.'.join(ansible_version.split('.')[:2]) strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin') for option, details in options.items(): try: names = [option] + details.get('aliases', []) except (TypeError, AttributeError): # Reporting of this syntax error will be handled by schema validation. continue if any(name in existing_options for name in names): # The option already existed. Make sure version_added didn't change. for name in names: existing_collection_name = existing_options.get(name, {}).get('version_added_collection') existing_version = existing_options.get(name, {}).get('version_added') if existing_version: break current_collection_name = details.get('version_added_collection') current_version = details.get('version_added') if current_collection_name != existing_collection_name: self.reporter.error( path=self.object_path, code='option-incorrect-version-added-collection', msg=('version_added for existing option (%s) should ' 'belong to collection %r. Currently belongs to %r' % (option, current_collection_name, existing_collection_name)) ) elif str(current_version) != str(existing_version): self.reporter.error( path=self.object_path, code='option-incorrect-version-added', msg=('version_added for existing option (%s) should ' 'be %r. Currently %r' % (option, existing_version, current_version)) ) continue try: collection_name = details.get('version_added_collection') version_added = self._create_strict_version( str(details.get('version_added', '0.0')), collection_name=collection_name) except ValueError as e: # already reported during schema validation continue if collection_name != self.collection_name: continue if (strict_ansible_version != mod_version_added and (version_added < strict_ansible_version or strict_ansible_version < version_added)): self.reporter.error( path=self.object_path, code='option-incorrect-version-added', msg=('version_added for new option (%s) should ' 'be %r. 
Currently %r' % (option, should_be, version_added)) ) return existing_doc @staticmethod def is_on_rejectlist(path): base_name = os.path.basename(path) file_name = os.path.splitext(base_name)[0] if file_name.startswith('_') and os.path.islink(path): return True if not frozenset((base_name, file_name)).isdisjoint(ModuleValidator.REJECTLIST): return True for pat in ModuleValidator.REJECTLIST_PATTERNS: if fnmatch(base_name, pat): return True return False def validate(self): super(ModuleValidator, self).validate() if not self._python_module() and not self._powershell_module(): self.reporter.error( path=self.object_path, code='invalid-extension', msg=('Official Ansible modules must have a .py ' 'extension for python modules or a .ps1 ' 'for powershell modules') ) self._python_module_override = True if self._python_module() and self.ast is None: self.reporter.error( path=self.object_path, code='python-syntax-error', msg='Python SyntaxError while parsing module' ) try: compile(self.text, self.path, 'exec') except Exception: self.reporter.trace( path=self.object_path, tracebk=traceback.format_exc() ) return end_of_deprecation_should_be_removed_only = False if self._python_module(): doc_info, docs = self._validate_docs() # See if current version => deprecated.removed_in, ie, should be docs only if docs and docs.get('deprecated', False): if 'removed_in' in docs['deprecated']: removed_in = None collection_name = docs['deprecated'].get('removed_from_collection') version = docs['deprecated']['removed_in'] if collection_name != self.collection_name: self.reporter.error( path=self.object_path, code='invalid-module-deprecation-source', msg=('The deprecation version for a module must be added in this collection') ) else: try: removed_in = self._create_strict_version(str(version), collection_name=collection_name) except ValueError as e: self.reporter.error( path=self.object_path, code='invalid-module-deprecation-version', msg=('The deprecation version %r cannot be parsed: %s' % (version, e)) ) if removed_in: if not self.collection: strict_ansible_version = self._create_strict_version( '.'.join(ansible_version.split('.')[:2]), self.collection_name) end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in if end_of_deprecation_should_be_removed_only: self.reporter.error( path=self.object_path, code='ansible-deprecated-module', msg='Module is marked for removal in version %s of Ansible when the current version is %s' % ( version, ansible_version), ) elif self.collection_version: strict_ansible_version = self.collection_version end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in if end_of_deprecation_should_be_removed_only: self.reporter.error( path=self.object_path, code='collection-deprecated-module', msg='Module is marked for removal in version %s of this collection when the current version is %s' % ( version, self.collection_version_str), ) # handle deprecation by date if 'removed_at_date' in docs['deprecated']: try: removed_at_date = docs['deprecated']['removed_at_date'] if parse_isodate(removed_at_date, allow_date=True) < datetime.date.today(): msg = "Module's deprecated.removed_at_date date '%s' is before today" % removed_at_date self.reporter.error(path=self.object_path, code='deprecated-date', msg=msg) except ValueError: # This happens if the date cannot be parsed. This is already checked by the schema. 
                        pass

        if self._python_module() and not self._just_docs() and not end_of_deprecation_should_be_removed_only:
            self._validate_ansible_module_call(docs)
            self._check_for_sys_exit()
            self._find_rejectlist_imports()
            self._find_module_utils()
            self._find_has_import()
            first_callable = self._get_first_callable() or 1000000  # use a bogus "high" line number if no callable exists
            self._ensure_imports_below_docs(doc_info, first_callable)
            self._check_for_subprocess()
            self._check_for_os_call()

        if self._powershell_module():
            if self.basename in self.PS_DOC_REJECTLIST:
                return

            self._validate_ps_replacers()
            docs_path = self._find_ps_docs_py_file()

            # We can only validate PowerShell arg spec if it is using the new Ansible.Basic.AnsibleModule util
            pattern = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*Ansible\.Basic'
            if re.search(pattern, self.text) and self.object_name not in self.PS_ARG_VALIDATE_REJECTLIST:
                with ModuleValidator(docs_path, base_branch=self.base_branch, git_cache=self.git_cache) as docs_mv:
                    docs = docs_mv._validate_docs()[1]
                    self._validate_ansible_module_call(docs)

        self._check_gpl3_header()
        if not self._just_docs() and not end_of_deprecation_should_be_removed_only:
            self._check_interpreter(powershell=self._powershell_module())
            self._check_type_instead_of_isinstance(
                powershell=self._powershell_module()
            )


class PythonPackageValidator(Validator):
    REJECTLIST_FILES = frozenset(('__pycache__',))

    def __init__(self, path, reporter=None):
        super(PythonPackageValidator, self).__init__(reporter=reporter or Reporter())

        self.path = path
        self.basename = os.path.basename(path)

    @property
    def object_name(self):
        return self.basename

    @property
    def object_path(self):
        return self.path

    def validate(self):
        super(PythonPackageValidator, self).validate()

        if self.basename in self.REJECTLIST_FILES:
            return

        init_file = os.path.join(self.path, '__init__.py')
        if not os.path.exists(init_file):
            self.reporter.error(
                path=self.object_path,
                code='subdirectory-missing-init',
                msg='Ansible module subdirectories must contain an __init__.py'
            )


def setup_collection_loader():
    collections_paths = os.environ.get('ANSIBLE_COLLECTIONS_PATH', '').split(os.pathsep)
    _AnsibleCollectionFinder(collections_paths)


def re_compile(value):
    """
    Argparse expects things to raise TypeError, re.compile raises an re.error
    exception

    This function is a shorthand to convert the re.error exception to a
    TypeError
    """

    try:
        return re.compile(value)
    except re.error as e:
        raise TypeError(e)


def run():
    parser = argparse.ArgumentParser(prog="validate-modules")
    parser.add_argument('modules', nargs='+',
                        help='Path to module or module directory')
    parser.add_argument('-w', '--warnings', help='Show warnings',
                        action='store_true')
    parser.add_argument('--exclude', help='RegEx exclusion pattern',
                        type=re_compile)
    parser.add_argument('--arg-spec', help='Analyze module argument spec',
                        action='store_true', default=False)
    parser.add_argument('--base-branch', default=None,
                        help='Used in determining if new options were added')
    parser.add_argument('--format', choices=['json', 'plain'], default='plain',
                        help='Output format. Default: "%(default)s"')
    parser.add_argument('--output', default='-',
                        help='Output location, use "-" for stdout. '
                             'Default "%(default)s"')
    parser.add_argument('--collection',
                        help='Specifies the path to the collection, when '
                             'validating files within a collection. Ensure '
                             'that ANSIBLE_COLLECTIONS_PATH is set so the '
                             'contents of the collection can be located')
    parser.add_argument('--collection-version',
                        help='The collection\'s version number used to check '
                             'deprecations')

    args = parser.parse_args()

    args.modules = [m.rstrip('/') for m in args.modules]

    reporter = Reporter()
    git_cache = GitCache(args.base_branch)

    check_dirs = set()

    routing = None
    if args.collection:
        setup_collection_loader()
        routing_file = 'meta/runtime.yml'
        # Load meta/runtime.yml if it exists, as it may contain deprecation information
        if os.path.isfile(routing_file):
            try:
                with open(routing_file) as f:
                    routing = yaml.safe_load(f)
            except yaml.error.MarkedYAMLError as ex:
                print('%s:%d:%d: YAML load failed: %s' % (routing_file, ex.context_mark.line + 1, ex.context_mark.column + 1, re.sub(r'\s+', ' ', str(ex))))
            except Exception as ex:  # pylint: disable=broad-except
                print('%s:%d:%d: YAML load failed: %s' % (routing_file, 0, 0, re.sub(r'\s+', ' ', str(ex))))

    for module in args.modules:
        if os.path.isfile(module):
            path = module
            if args.exclude and args.exclude.search(path):
                continue
            if ModuleValidator.is_on_rejectlist(path):
                continue
            with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
                                 analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
                                 git_cache=git_cache, reporter=reporter, routing=routing) as mv1:
                mv1.validate()
                check_dirs.add(os.path.dirname(path))

        for root, dirs, files in os.walk(module):
            basedir = root[len(module) + 1:].split('/', 1)[0]
            if basedir in REJECTLIST_DIRS:
                continue
            for dirname in dirs:
                if root == module and dirname in REJECTLIST_DIRS:
                    continue
                path = os.path.join(root, dirname)
                if args.exclude and args.exclude.search(path):
                    continue
                check_dirs.add(path)

            for filename in files:
                path = os.path.join(root, filename)
                if args.exclude and args.exclude.search(path):
                    continue
                if ModuleValidator.is_on_rejectlist(path):
                    continue
                with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
                                     analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
                                     git_cache=git_cache, reporter=reporter, routing=routing) as mv2:
                    mv2.validate()

    if not args.collection:
        for path in sorted(check_dirs):
            pv = PythonPackageValidator(path, reporter=reporter)
            pv.validate()

    if args.format == 'plain':
        sys.exit(reporter.plain(warnings=args.warnings, output=args.output))
    else:
        sys.exit(reporter.json(warnings=args.warnings, output=args.output))


class GitCache:
    def __init__(self, base_branch):
        self.base_branch = base_branch

        if self.base_branch:
            self.base_tree = self._git(['ls-tree', '-r', '--name-only', self.base_branch, 'lib/ansible/modules/'])
        else:
            self.base_tree = []

        try:
            self.head_tree = self._git(['ls-tree', '-r', '--name-only', 'HEAD', 'lib/ansible/modules/'])
        except GitError as ex:
            if ex.status == 128:
                # fallback when there is no .git directory
                self.head_tree = self._get_module_files()
            else:
                raise
        except OSError as ex:
            if ex.errno == errno.ENOENT:
                # fallback when git is not installed
                self.head_tree = self._get_module_files()
            else:
                raise

        self.base_module_paths = dict((os.path.basename(p), p) for p in self.base_tree if os.path.splitext(p)[1] in ('.py', '.ps1'))
        self.base_module_paths.pop('__init__.py', None)

        self.head_aliased_modules = set()
        for path in self.head_tree:
            filename = os.path.basename(path)
            if filename.startswith('_') and filename != '__init__.py':
                if os.path.islink(path):
                    self.head_aliased_modules.add(os.path.basename(os.path.realpath(path)))

    @staticmethod
    def _get_module_files():
        module_files = []
        for (dir_path, dir_names, file_names) in os.walk('lib/ansible/modules/'):
            for file_name in file_names:
                module_files.append(os.path.join(dir_path, file_name))
        return module_files

    @staticmethod
    def _git(args):
        cmd = ['git'] + args
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise GitError(stderr, p.returncode)
        return stdout.decode('utf-8').splitlines()


class GitError(Exception):
    def __init__(self, message, status):
        super(GitError, self).__init__(message)
        self.status = status


def main():
    try:
        run()
    except KeyboardInterrupt:
        pass
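The `re_compile` helper above leans on an argparse convention worth spelling out: argparse treats `TypeError` (and `ValueError`) raised by a `type=` callable as a clean usage error, while an uncaught `re.error` would surface as a traceback. A stdlib-only sketch of that interaction:

```python
import argparse
import re


def re_compile(value):
    # Same shape as the validator's helper: translate re.error into the
    # TypeError that argparse knows how to report.
    try:
        return re.compile(value)
    except re.error as e:
        raise TypeError(e)


parser = argparse.ArgumentParser(prog='demo')
parser.add_argument('--exclude', type=re_compile)

print(parser.parse_args(['--exclude', r'^_']).exclude.pattern)  # prints: ^_
# An invalid pattern exits with a usage error instead of a traceback:
#   parser.parse_args(['--exclude', '['])
#   -> demo: error: argument --exclude: invalid re_compile value: '['
```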
closed
ansible/ansible
https://github.com/ansible/ansible
76,124
hostname module (devel) - KeyError: 'DebianStrategy' with `use: debian`
### Summary

The documentation says that `debian` is a valid use for the hostname module `use` parameter, but when attempting to use it, a `KeyError` is thrown. Likely introduced in 502270c804c33d3bc963930dc85e0f4ca359674d and possibly just needs a docs fix.

```
Traceback (most recent call last):
  File "/home/pi/.ansible/tmp/ansible-tmp-1634987643.6837337-31154-42761744967868/AnsiballZ_hostname.py", line 107, in <module>
    _ansiballz_main()
  File "/home/pi/.ansible/tmp/ansible-tmp-1634987643.6837337-31154-42761744967868/AnsiballZ_hostname.py", line 99, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/pi/.ansible/tmp/ansible-tmp-1634987643.6837337-31154-42761744967868/AnsiballZ_hostname.py", line 48, in invoke_module
    run_name='__main__', alter_sys=True)
  File "/usr/lib/python3.7/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_hostname_payload_1ou9ee4b/ansible_hostname_payload.zip/ansible/modules/hostname.py", line 983, in <module>
  File "/tmp/ansible_hostname_payload_1ou9ee4b/ansible_hostname_payload.zip/ansible/modules/hostname.py", line 952, in main
  File "/tmp/ansible_hostname_payload_1ou9ee4b/ansible_hostname_payload.zip/ansible/modules/hostname.py", line 162, in __init__
KeyError: 'DebianStrategy'
```

### Issue Type

Bug Report

### Component Name

hostname

### Ansible Version

```console
$ ansible --version
ansible [core 2.13.0.dev0] (devel b4cbe1adcf) last updated 2021/10/23 06:13:48 (GMT -500)
  config file = None
  configured module search path = ['/nvme/gentoo/rick/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /nvme/gentoo/rick/dev/ansible/ansible/lib/ansible
  ansible collection location = /nvme/gentoo/rick/.ansible/collections:/usr/share/ansible/collections
  executable location = /nvme/gentoo/rick/dev/ansible/ansible/bin/ansible
  python version = 3.8.9 (default, May 27 2021, 19:05:33) [GCC 10.2.0]
  jinja version = 2.11.2
  libyaml = True
```

### Configuration

```console
$ ansible-config dump --only-changed
```

### OS / Environment

ansible devel, gentoo controller, raspberry pi OS remote node

### Steps to Reproduce

```yaml
- name: Set hostname
  hostname:
    use: debian
    name: foo
```

### Expected Results

No traceback

### Actual Results

```console
Traceback as above.
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76124
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2021-10-23T11:24:28Z
python
2022-02-09T15:26:42Z
changelogs/fragments/76124-hostname-debianstrategy.yml
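The traceback bottoms out at the strategy lookup in `Hostname.__init__` (line 162 of the packaged module). A minimal, self-contained sketch of why that lookup fails; the `STRATS` excerpt is copied from the before-fix `lib/ansible/modules/hostname.py` reproduced below, and `SystemdStrategy` here is a stand-in class so the happy path is visible:

```python
# STRATS maps the user-facing `use:` choice to a string, and the module then
# asks its globals() for '<String>Strategy'. 'systemd' resolves because a
# SystemdStrategy class exists; 'debian' does not, because DebianHostname
# relies on FileStrategy and no DebianStrategy class was ever defined.
STRATS = {'debian': 'Debian', 'systemd': 'Systemd'}  # excerpt of the real table


class SystemdStrategy:
    """Stand-in for the real strategy class, which does exist in the module."""


use = 'systemd'
print(globals()['%sStrategy' % STRATS[use]])   # <class '__main__.SystemdStrategy'>

use = 'debian'
strat = globals()['%sStrategy' % STRATS[use]]  # KeyError: 'DebianStrategy'
```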
lib/ansible/modules/hostname.py
# -*- coding: utf-8 -*- # Copyright: (c) 2013, Hiroaki Nakamura <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: hostname author: - Adrian Likins (@alikins) - Hideki Saito (@saito-hideki) version_added: "1.4" short_description: Manage hostname requirements: [ hostname ] description: - Set system's hostname. Supports most OSs/Distributions including those using C(systemd). - Windows, HP-UX, and AIX are not currently supported. notes: - This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template) or M(ansible.builtin.replace). - On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName) cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName). options: name: description: - Name of the host. - If the value is a fully qualified domain name that does not resolve from the given host, this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout. type: str required: true use: description: - Which strategy to use to update the hostname. - If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information. - Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'. choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd'] type: str version_added: '2.9' extends_documentation_fragment: - action_common_attributes - action_common_attributes.facts attributes: check_mode: support: full diff_mode: support: full facts: support: full platform: platforms: posix ''' EXAMPLES = ''' - name: Set a hostname ansible.builtin.hostname: name: web01 - name: Set a hostname specifying strategy ansible.builtin.hostname: name: web01 use: systemd ''' import os import platform import socket import traceback from ansible.module_utils.basic import ( AnsibleModule, get_distribution, get_distribution_version, ) from ansible.module_utils.common.sys_info import get_platform_subclass from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector from ansible.module_utils.facts.utils import get_file_lines, get_file_content from ansible.module_utils._text import to_native, to_text from ansible.module_utils.six import PY3, text_type STRATS = { 'alpine': 'Alpine', 'debian': 'Debian', 'freebsd': 'FreeBSD', 'generic': 'Generic', 'macos': 'Darwin', 'macosx': 'Darwin', 'darwin': 'Darwin', 'openbsd': 'OpenBSD', 'openrc': 'OpenRC', 'redhat': 'RedHat', 'sles': 'SLES', 'solaris': 'Solaris', 'systemd': 'Systemd', } class UnimplementedStrategy(object): def __init__(self, module): self.module = module def update_current_and_permanent_hostname(self): self.unimplemented_error() def update_current_hostname(self): self.unimplemented_error() def update_permanent_hostname(self): self.unimplemented_error() def get_current_hostname(self): self.unimplemented_error() def set_current_hostname(self, name): self.unimplemented_error() def get_permanent_hostname(self): self.unimplemented_error() def set_permanent_hostname(self, name): self.unimplemented_error() def unimplemented_error(self): system = platform.system() distribution = 
get_distribution() if distribution is not None: msg_platform = '%s (%s)' % (system, distribution) else: msg_platform = system self.module.fail_json( msg='hostname module cannot be used on platform %s' % msg_platform) class Hostname(object): """ This is a generic Hostname manipulation class that is subclassed based on platform. A subclass may wish to set different strategy instance to self.strategy. All subclasses MUST define platform and distribution (which may be None). """ platform = 'Generic' distribution = None strategy_class = UnimplementedStrategy def __new__(cls, *args, **kwargs): new_cls = get_platform_subclass(Hostname) return super(cls, new_cls).__new__(new_cls) def __init__(self, module): self.module = module self.name = module.params['name'] self.use = module.params['use'] if self.use is not None: strat = globals()['%sStrategy' % STRATS[self.use]] self.strategy = strat(module) elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module): # This is Linux and systemd is active self.strategy = SystemdStrategy(module) else: self.strategy = self.strategy_class(module) def update_current_and_permanent_hostname(self): return self.strategy.update_current_and_permanent_hostname() def get_current_hostname(self): return self.strategy.get_current_hostname() def set_current_hostname(self, name): self.strategy.set_current_hostname(name) def get_permanent_hostname(self): return self.strategy.get_permanent_hostname() def set_permanent_hostname(self, name): self.strategy.set_permanent_hostname(name) class BaseStrategy(object): def __init__(self, module): self.module = module self.changed = False def update_current_and_permanent_hostname(self): self.update_current_hostname() self.update_permanent_hostname() return self.changed def update_current_hostname(self): name = self.module.params['name'] current_name = self.get_current_hostname() if current_name != name: if not self.module.check_mode: self.set_current_hostname(name) self.changed = True def update_permanent_hostname(self): name = self.module.params['name'] permanent_name = self.get_permanent_hostname() if permanent_name != name: if not self.module.check_mode: self.set_permanent_hostname(name) self.changed = True def get_current_hostname(self): return self.get_permanent_hostname() def set_current_hostname(self, name): pass def get_permanent_hostname(self): raise NotImplementedError def set_permanent_hostname(self, name): raise NotImplementedError class CommandStrategy(BaseStrategy): COMMAND = 'hostname' def __init__(self, module): super(CommandStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostname_cmd] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): return 'UNKNOWN' def set_permanent_hostname(self, name): pass class FileStrategy(BaseStrategy): FILE = '/etc/hostname' def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: return get_file_lines(self.FILE) except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: with 
open(self.FILE, 'w+') as f: f.write("%s\n" % name) except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class SLESStrategy(FileStrategy): """ This is a SLES Hostname strategy class - it edits the /etc/HOSTNAME file. """ FILE = '/etc/HOSTNAME' class RedHatStrategy(BaseStrategy): """ This is a Redhat Hostname strategy class - it edits the /etc/sysconfig/network file. """ NETWORK_FILE = '/etc/sysconfig/network' def get_permanent_hostname(self): try: for line in get_file_lines(self.NETWORK_FILE): line = to_native(line).strip() if line.startswith('HOSTNAME'): k, v = line.split('=') return v.strip() self.module.fail_json( "Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE ) except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: lines = [] found = False content = get_file_content(self.NETWORK_FILE, strip=False) or "" for line in content.splitlines(True): line = to_native(line) if line.strip().startswith('HOSTNAME'): lines.append("HOSTNAME=%s\n" % name) found = True else: lines.append(line) if not found: lines.append("HOSTNAME=%s\n" % name) with open(self.NETWORK_FILE, 'w+') as f: f.writelines(lines) except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class AlpineStrategy(FileStrategy): """ This is a Alpine Linux Hostname manipulation strategy class - it edits the /etc/hostname file then run hostname -F /etc/hostname. """ FILE = '/etc/hostname' COMMAND = 'hostname' def set_current_hostname(self, name): super(AlpineStrategy, self).set_current_hostname(name) hostname_cmd = self.module.get_bin_path(self.COMMAND, True) cmd = [hostname_cmd, '-F', self.FILE] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class SystemdStrategy(BaseStrategy): """ This is a Systemd hostname manipulation strategy class - it uses the hostnamectl command. 
""" COMMAND = "hostnamectl" def __init__(self, module): super(SystemdStrategy, self).__init__(module) self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostnamectl_cmd, '--transient', 'status'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): if len(name) > 64: self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name") cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): cmd = [self.hostnamectl_cmd, '--static', 'status'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): if len(name) > 64: self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name") cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class OpenRCStrategy(BaseStrategy): """ This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits the /etc/conf.d/hostname file. """ FILE = '/etc/conf.d/hostname' def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: for line in get_file_lines(self.FILE): line = line.strip() if line.startswith('hostname='): return line[10:].strip('"') except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: lines = [x.strip() for x in get_file_lines(self.FILE)] for i, line in enumerate(lines): if line.startswith('hostname='): lines[i] = 'hostname="%s"' % name break with open(self.FILE, 'w') as f: f.write('\n'.join(lines) + '\n') except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class OpenBSDStrategy(FileStrategy): """ This is a OpenBSD family Hostname manipulation strategy class - it edits the /etc/myname file. """ FILE = '/etc/myname' class SolarisStrategy(BaseStrategy): """ This is a Solaris11 or later Hostname manipulation strategy class - it execute hostname command. 
""" COMMAND = "hostname" def __init__(self, module): super(SolarisStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def set_current_hostname(self, name): cmd_option = '-t' cmd = [self.hostname_cmd, cmd_option, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): fmri = 'svc:/system/identity:node' pattern = 'config/nodename' cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern) rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class FreeBSDStrategy(BaseStrategy): """ This is a FreeBSD hostname manipulation strategy class - it edits the /etc/rc.conf.d/hostname file. """ FILE = '/etc/rc.conf.d/hostname' COMMAND = "hostname" def __init__(self, module): super(FreeBSDStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostname_cmd] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: for line in get_file_lines(self.FILE): line = line.strip() if line.startswith('hostname='): return line[10:].strip('"') except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: if os.path.isfile(self.FILE): lines = [x.strip() for x in get_file_lines(self.FILE)] for i, line in enumerate(lines): if line.startswith('hostname='): lines[i] = 'hostname="%s"' % name break else: lines = ['hostname="%s"' % name] with open(self.FILE, 'w') as f: f.write('\n'.join(lines) + '\n') except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class DarwinStrategy(BaseStrategy): """ This is a macOS hostname manipulation strategy class. It uses /usr/sbin/scutil to set ComputerName, HostName, and LocalHostName. HostName corresponds to what most platforms consider to be hostname. It controls the name used on the command line and SSH. However, macOS also has LocalHostName and ComputerName settings. LocalHostName controls the Bonjour/ZeroConf name, used by services like AirDrop. This class implements a method, _scrub_hostname(), that mimics the transformations macOS makes on hostnames when enterened in the Sharing preference pane. It replaces spaces with dashes and removes all special characters. ComputerName is the name used for user-facing GUI services, like the System Preferences/Sharing pane and when users connect to the Mac over the network. 
""" def __init__(self, module): super(DarwinStrategy, self).__init__(module) self.scutil = self.module.get_bin_path('scutil', True) self.name_types = ('HostName', 'ComputerName', 'LocalHostName') self.scrubbed_name = self._scrub_hostname(self.module.params['name']) def _make_translation(self, replace_chars, replacement_chars, delete_chars): if PY3: return str.maketrans(replace_chars, replacement_chars, delete_chars) if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type): raise ValueError('replace_chars and replacement_chars must both be strings') if len(replace_chars) != len(replacement_chars): raise ValueError('replacement_chars must be the same length as replace_chars') table = dict(zip((ord(c) for c in replace_chars), replacement_chars)) for char in delete_chars: table[ord(char)] = None return table def _scrub_hostname(self, name): """ LocalHostName only accepts valid DNS characters while HostName and ComputerName accept a much wider range of characters. This function aims to mimic how macOS translates a friendly name to the LocalHostName. """ # Replace all these characters with a single dash name = to_text(name) replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ ' delete_chars = u".'" table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars) name = name.translate(table) # Replace multiple dashes with a single dash while '-' * 2 in name: name = name.replace('-' * 2, '') name = name.rstrip('-') return name def get_current_hostname(self): cmd = [self.scutil, '--get', 'HostName'] rc, out, err = self.module.run_command(cmd) if rc != 0 and 'HostName: not set' not in err: self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def get_permanent_hostname(self): cmd = [self.scutil, '--get', 'ComputerName'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): for hostname_type in self.name_types: cmd = [self.scutil, '--set', hostname_type] if hostname_type == 'LocalHostName': cmd.append(to_native(self.scrubbed_name)) else: cmd.append(to_native(name)) rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type)) def set_current_hostname(self, name): pass def update_current_hostname(self): pass def update_permanent_hostname(self): name = self.module.params['name'] # Get all the current host name values in the order of self.name_types all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types) # Get the expected host name values based on the order in self.name_types expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types) # Ensure all three names are updated if all_names != expected_names: if not self.module.check_mode: self.set_permanent_hostname(name) self.changed = True class SLESHostname(Hostname): platform = 'Linux' distribution = 'Sles' try: distribution_version = get_distribution_version() # cast to float may raise ValueError on non SLES, we use float for a little more safety over int if distribution_version and 10 <= float(distribution_version) <= 12: strategy_class = SLESStrategy else: raise ValueError() except ValueError: strategy_class = 
UnimplementedStrategy


class RHELHostname(Hostname):
    platform = 'Linux'
    distribution = 'Redhat'
    strategy_class = RedHatStrategy


class CentOSHostname(Hostname):
    platform = 'Linux'
    distribution = 'Centos'
    strategy_class = RedHatStrategy


class AnolisOSHostname(Hostname):
    platform = 'Linux'
    distribution = 'Anolis'
    strategy_class = RedHatStrategy


class CloudlinuxserverHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cloudlinuxserver'
    strategy_class = RedHatStrategy


class CloudlinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cloudlinux'
    strategy_class = RedHatStrategy


class AlinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Alinux'
    strategy_class = RedHatStrategy


class ScientificHostname(Hostname):
    platform = 'Linux'
    distribution = 'Scientific'
    strategy_class = RedHatStrategy


class OracleLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Oracle'
    strategy_class = RedHatStrategy


class VirtuozzoLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Virtuozzo'
    strategy_class = RedHatStrategy


class AmazonLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Amazon'
    strategy_class = RedHatStrategy


class DebianHostname(Hostname):
    platform = 'Linux'
    distribution = 'Debian'
    strategy_class = FileStrategy


class KylinHostname(Hostname):
    platform = 'Linux'
    distribution = 'Kylin'
    strategy_class = FileStrategy


class CumulusHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cumulus-linux'
    strategy_class = FileStrategy


class KaliHostname(Hostname):
    platform = 'Linux'
    distribution = 'Kali'
    strategy_class = FileStrategy


class ParrotHostname(Hostname):
    platform = 'Linux'
    distribution = 'Parrot'
    strategy_class = FileStrategy


class UbuntuHostname(Hostname):
    platform = 'Linux'
    distribution = 'Ubuntu'
    strategy_class = FileStrategy


class LinuxmintHostname(Hostname):
    platform = 'Linux'
    distribution = 'Linuxmint'
    strategy_class = FileStrategy


class LinaroHostname(Hostname):
    platform = 'Linux'
    distribution = 'Linaro'
    strategy_class = FileStrategy


class DevuanHostname(Hostname):
    platform = 'Linux'
    distribution = 'Devuan'
    strategy_class = FileStrategy


class RaspbianHostname(Hostname):
    platform = 'Linux'
    distribution = 'Raspbian'
    strategy_class = FileStrategy


class GentooHostname(Hostname):
    platform = 'Linux'
    distribution = 'Gentoo'
    strategy_class = OpenRCStrategy


class ALTLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Altlinux'
    strategy_class = RedHatStrategy


class AlpineLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Alpine'
    strategy_class = AlpineStrategy


class OpenBSDHostname(Hostname):
    platform = 'OpenBSD'
    distribution = None
    strategy_class = OpenBSDStrategy


class SolarisHostname(Hostname):
    platform = 'SunOS'
    distribution = None
    strategy_class = SolarisStrategy


class FreeBSDHostname(Hostname):
    platform = 'FreeBSD'
    distribution = None
    strategy_class = FreeBSDStrategy


class NetBSDHostname(Hostname):
    platform = 'NetBSD'
    distribution = None
    strategy_class = FreeBSDStrategy


class NeonHostname(Hostname):
    platform = 'Linux'
    distribution = 'Neon'
    strategy_class = FileStrategy


class DarwinHostname(Hostname):
    platform = 'Darwin'
    distribution = None
    strategy_class = DarwinStrategy


class VoidLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Void'
    strategy_class = FileStrategy


class PopHostname(Hostname):
    platform = 'Linux'
    distribution = 'Pop'
    strategy_class = FileStrategy


class EurolinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Eurolinux'
    strategy_class = RedHatStrategy


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            use=dict(type='str', choices=STRATS.keys())
        ),
        supports_check_mode=True,
    )

    hostname = Hostname(module)
    name = module.params['name']

    current_hostname = hostname.get_current_hostname()
    permanent_hostname = hostname.get_permanent_hostname()

    changed = hostname.update_current_and_permanent_hostname()

    if name != current_hostname:
        name_before = current_hostname
    elif name != permanent_hostname:
        name_before = permanent_hostname
    else:
        name_before = permanent_hostname

    # NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
    # slow to return if the name does not resolve correctly.
    kw = dict(changed=changed, name=name,
              ansible_facts=dict(ansible_hostname=name.split('.')[0],
                                 ansible_nodename=name,
                                 ansible_fqdn=socket.getfqdn(),
                                 ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))

    if changed:
        kw['diff'] = {'after': 'hostname = ' + name + '\n',
                      'before': 'hostname = ' + name_before + '\n'}

    module.exit_json(**kw)


if __name__ == '__main__':
    main()
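One way to make that lookup safe, shown here as an illustrative sketch against the classes in the file above (not necessarily the shape of the merged fix in the linked PR), is to give Debian-family distributions a concrete strategy class and map `use:` choices straight to classes, so every valid choice is guaranteed to resolve:

```python
# Illustrative sketch only. DebianStrategy is a hypothetical alias for the
# FileStrategy behaviour DebianHostname already uses; FileStrategy and
# SystemdStrategy are the classes defined in hostname.py above.
class DebianStrategy(FileStrategy):
    """Debian family keeps the hostname in /etc/hostname."""
    FILE = '/etc/hostname'


STRATS = {
    'debian': DebianStrategy,
    'systemd': SystemdStrategy,
    # ... one entry per supported `use:` choice ...
}

# Hostname.__init__ can then instantiate directly, with no globals() lookup:
#     if self.use is not None:
#         self.strategy = STRATS[self.use](module)
```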
test/integration/targets/hostname/tasks/Debian.yml
test/integration/targets/hostname/tasks/main.yml
# Setting the hostname in our test containers doesn't work currently
- when: ansible_facts.virtualization_type not in ('docker', 'container', 'containerd')
  block:
    - name: Include distribution specific variables
      include_vars: "{{ lookup('first_found', params) }}"
      vars:
        params:
          files:
            - "{{ ansible_facts.distribution }}.yml"
            - "{{ ansible_facts.os_family }}.yml"
            - default.yml
          paths:
            - "{{ role_path }}/vars"

    - name: Get current hostname
      command: hostname
      register: original

    - import_tasks: test_check_mode.yml
    - import_tasks: test_normal.yml

    - name: Include distribution specific tasks
      include_tasks:
        file: "{{ lookup('first_found', files) }}"
      vars:
        files:
          - "{{ ansible_facts.distribution }}.yml"
          - default.yml

  always:
    # Reset back to original hostname
    - name: Move back original file if it existed
      command: mv -f {{ _hostname_file }}.orig {{ _hostname_file }}
      when: hn_stat.stat.exists | default(False)

    - name: Delete the file if it never existed
      file:
        path: "{{ _hostname_file }}"
        state: absent
      when: not hn_stat.stat.exists | default(True)

    - name: Reset back to original hostname
      hostname:
        name: "{{ original.stdout }}"
      register: revert

    - name: Ensure original hostname was reset
      assert:
        that:
          - revert is changed
test/integration/targets/hostname/tasks/test_normal.yml
- name: Run hostname module for real now
  hostname:
    name: crocodile.ansible.test.doesthiswork.net.example.com
  register: hn2

- name: Get hostname
  command: hostname
  register: current_after_hn2

- name: Run hostname again to ensure it does not change
  hostname:
    name: crocodile.ansible.test.doesthiswork.net.example.com
  register: hn3

- name: Get hostname
  command: hostname
  register: current_after_hn3

- assert:
    that:
      - hn2 is changed
      - hn3 is not changed
      - current_after_hn2.stdout == 'crocodile.ansible.test.doesthiswork.net.example.com'
      - current_after_hn3.stdout == current_after_hn2.stdout
closed
ansible/ansible
https://github.com/ansible/ansible
76,124
hostname module (devel) - KeyError: 'DebianStrategy' with `use: debian`
https://github.com/ansible/ansible/issues/76124
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2021-10-23T11:24:28Z
python
2022-02-09T15:26:42Z
test/units/modules/test_hostname.py
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import shutil
import tempfile

from units.compat.mock import patch, MagicMock, mock_open

from ansible.module_utils import basic
from ansible.module_utils.common._utils import get_all_subclasses
from ansible.modules import hostname
from units.modules.utils import ModuleTestCase, set_module_args
from ansible.module_utils.six import PY2


class TestHostname(ModuleTestCase):

    @patch('os.path.isfile')
    def test_stategy_get_never_writes_in_check_mode(self, isfile):
        isfile.return_value = True

        set_module_args({'name': 'fooname', '_ansible_check_mode': True})
        subclasses = get_all_subclasses(hostname.BaseStrategy)
        module = MagicMock()
        for cls in subclasses:
            instance = cls(module)

            instance.module.run_command = MagicMock()
            instance.module.run_command.return_value = (0, '', '')

            m = mock_open()
            builtins = 'builtins'
            if PY2:
                builtins = '__builtin__'
            with patch('%s.open' % builtins, m):
                instance.get_permanent_hostname()
                instance.get_current_hostname()
                self.assertFalse(
                    m.return_value.write.called,
                    msg='%s called write, should not have' % str(cls))


class TestRedhatStrategy(ModuleTestCase):
    def setUp(self):
        super(TestRedhatStrategy, self).setUp()
        self.testdir = tempfile.mkdtemp(prefix='ansible-test-hostname-')
        self.network_file = os.path.join(self.testdir, "network")

    def tearDown(self):
        super(TestRedhatStrategy, self).tearDown()
        shutil.rmtree(self.testdir, ignore_errors=True)

    @property
    def instance(self):
        self.module = MagicMock()
        instance = hostname.RedHatStrategy(self.module)
        instance.NETWORK_FILE = self.network_file
        return instance

    def test_get_permanent_hostname_missing(self):
        self.assertIsNone(self.instance.get_permanent_hostname())
        self.assertTrue(self.module.fail_json.called)
        self.module.fail_json.assert_called_with(
            "Unable to locate HOSTNAME entry in %s" % self.network_file
        )

    def test_get_permanent_hostname_line_missing(self):
        with open(self.network_file, "w") as f:
            f.write("# some other content\n")
        self.assertIsNone(self.instance.get_permanent_hostname())
        self.module.fail_json.assert_called_with(
            "Unable to locate HOSTNAME entry in %s" % self.network_file
        )

    def test_get_permanent_hostname_existing(self):
        with open(self.network_file, "w") as f:
            f.write(
                "some other content\n"
                "HOSTNAME=foobar\n"
                "more content\n"
            )
        self.assertEqual(self.instance.get_permanent_hostname(), "foobar")

    def test_get_permanent_hostname_existing_whitespace(self):
        with open(self.network_file, "w") as f:
            f.write(
                "some other content\n"
                " HOSTNAME=foobar \n"
                "more content\n"
            )
        self.assertEqual(self.instance.get_permanent_hostname(), "foobar")

    def test_set_permanent_hostname_missing(self):
        self.instance.set_permanent_hostname("foobar")
        with open(self.network_file) as f:
            self.assertEqual(f.read(), "HOSTNAME=foobar\n")

    def test_set_permanent_hostname_line_missing(self):
        with open(self.network_file, "w") as f:
            f.write("# some other content\n")
        self.instance.set_permanent_hostname("foobar")
        with open(self.network_file) as f:
            self.assertEqual(f.read(), "# some other content\nHOSTNAME=foobar\n")

    def test_set_permanent_hostname_existing(self):
        with open(self.network_file, "w") as f:
            f.write(
                "some other content\n"
                "HOSTNAME=spam\n"
                "more content\n"
            )
        self.instance.set_permanent_hostname("foobar")
        with open(self.network_file) as f:
            self.assertEqual(
                f.read(),
                "some other content\n"
                "HOSTNAME=foobar\n"
                "more content\n"
            )

    def test_set_permanent_hostname_existing_whitespace(self):
        with open(self.network_file, "w") as f:
            f.write(
                "some other content\n"
                " HOSTNAME=spam \n"
                "more content\n"
            )
        self.instance.set_permanent_hostname("foobar")
        with open(self.network_file) as f:
            self.assertEqual(
                f.read(),
                "some other content\n"
                "HOSTNAME=foobar\n"
                "more content\n"
            )
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
##### SUMMARY

Cannot set hostname on 2.9.6

##### ISSUE TYPE

- Bug Report

##### COMPONENT NAME

[hostname](https://docs.ansible.com/ansible/latest/modules/hostname_module.html)

##### ANSIBLE VERSION

```
$ ansible --version
ansible 2.9.6
  config file = /home/porn/test/ansible/ansible.cfg
  configured module search path = ['/home/porn/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/porn/.local/lib/python3.6/site-packages/ansible
  executable location = /home/porn/.local/bin/ansible
  python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```

##### CONFIGURATION

```
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/porn/test/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/porn/test/ansible/ansible.cfg) = /tmp/%%h-%%p-%%r
DEFAULT_CALLBACK_WHITELIST(/home/porn/test/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_GATHERING(/home/porn/test/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/porn/test/ansible/ansible.cfg) = ['/home/porn/test/ansible/hosts']
DEFAULT_REMOTE_PORT(/home/porn/test/ansible/ansible.cfg) = 29010
DEFAULT_REMOTE_USER(/home/porn/test/ansible/ansible.cfg) = ubuntu
HOST_KEY_CHECKING(/home/porn/test/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/home/porn/test/ansible/ansible.cfg) = /usr/bin/python3
```

##### OS / ENVIRONMENT

```
$ uname -a
Linux pornmachine 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
```

##### STEPS TO REPRODUCE

Try to set the hostname on another Ubuntu 18.04 host:

```yaml
- name: Base setup
  hosts: workers
  tasks:
    - hostname:
        name: "worker"
        use: debian
      become: yes
```

##### EXPECTED RESULTS

The hostname should be set.

##### ACTUAL RESULTS

The command fails:

```
TASK [hostname] *****************************************************************************************************************************************************************************
task path: /home/porn/test/ansible/worker.yml:5
Friday 03 April 2020 20:27:27 +0200 (0:00:01.969) 0:00:02.022 **********
Using module file /home/porn/.local/lib/python3.6/site-packages/ansible/modules/system/hostname.py
Pipelining is enabled.
<35.183.134.118> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<35.183.134.118> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=29010 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/tmp/%h-%p-%r 35.183.134.118 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qzpktirbekasjjznnyfijducgvmqtgzq ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<35.183.134.118> (1, b'\n{"msg": "Command failed rc=1, out=, err=Unknown operation worker\\n", "failed": true, "invocation": {"module_args": {"name": "worker", "use": "debian"}}}\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /home/porn/.ssh/config\r\ndebug1: /home/porn/.ssh/config line 15: Applying options for 35.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 11218\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<35.183.134.118> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /home/porn/.ssh/config
debug1: /home/porn/.ssh/config line 15: Applying options for 35.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 11218
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
fatal: [35.183.134.118]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "name": "worker",
            "use": "debian"
        }
    },
    "msg": "Command failed rc=1, out=, err=Unknown operation worker\n"
}
```

The only workaround is downgrading Ansible to 2.9.5. :(
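The key detail in the output above is `err=Unknown operation worker`: the module wraps whatever the underlying tool printed in a generic `Command failed rc=...` message, and `Unknown operation <arg>` is the complaint systemd-era `hostnamectl` makes when handed a bare name instead of a `set-hostname` verb, which suggests the 2.9.6 Debian path built the wrong command line. A rough stand-in sketch of that reporting pattern (the `run_command` helper here is hypothetical, not Ansible's):

```python
import subprocess


def run_command(cmd):
    # Hypothetical stand-in for AnsibleModule.run_command: (rc, stdout, stderr).
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out.decode(), err.decode()


def set_hostname_buggy(name):
    # Shaped like the failing call: the 'set-hostname' verb is missing, so
    # hostnamectl treats the name itself as a verb and rejects it.
    rc, out, err = run_command(['hostnamectl', name])
    if rc != 0:
        # The same generic wrapper that produced the message in the log above.
        raise RuntimeError("Command failed rc=%d, out=%s, err=%s" % (rc, out, err))

# set_hostname_buggy('worker')  # -> "... err=Unknown operation worker"
```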
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
changelogs/fragments/76124-hostname-debianstrategy.yml
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
lib/ansible/modules/hostname.py
# -*- coding: utf-8 -*-

# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

DOCUMENTATION = '''
---
module: hostname
author:
    - Adrian Likins (@alikins)
    - Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
    - Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
    - Windows, HP-UX, and AIX are not currently supported.
notes:
    - This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
      or M(ansible.builtin.replace).
    - On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName).
      Since C(LocalHostName) cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
options:
    name:
        description:
            - Name of the host.
            - If the value is a fully qualified domain name that does not resolve from the given host,
              this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
        type: str
        required: true
    use:
        description:
            - Which strategy to use to update the hostname.
            - If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
            - Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
        choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
        type: str
        version_added: '2.9'
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
    check_mode:
        support: full
    diff_mode:
        support: full
    facts:
        support: full
    platform:
        platforms: posix
'''

EXAMPLES = '''
- name: Set a hostname
  ansible.builtin.hostname:
    name: web01

- name: Set a hostname specifying strategy
  ansible.builtin.hostname:
    name: web01
    use: systemd
'''

import os
import platform
import socket
import traceback

from ansible.module_utils.basic import (
    AnsibleModule,
    get_distribution,
    get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines, get_file_content
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type

STRATS = {
    'alpine': 'Alpine',
    'debian': 'Debian',
    'freebsd': 'FreeBSD',
    'generic': 'Generic',
    'macos': 'Darwin',
    'macosx': 'Darwin',
    'darwin': 'Darwin',
    'openbsd': 'OpenBSD',
    'openrc': 'OpenRC',
    'redhat': 'RedHat',
    'sles': 'SLES',
    'solaris': 'Solaris',
    'systemd': 'Systemd',
}


class UnimplementedStrategy(object):
    def __init__(self, module):
        self.module = module

    def update_current_and_permanent_hostname(self):
        self.unimplemented_error()

    def update_current_hostname(self):
        self.unimplemented_error()

    def update_permanent_hostname(self):
        self.unimplemented_error()

    def get_current_hostname(self):
        self.unimplemented_error()

    def set_current_hostname(self, name):
        self.unimplemented_error()

    def get_permanent_hostname(self):
        self.unimplemented_error()

    def set_permanent_hostname(self, name):
        self.unimplemented_error()

    def unimplemented_error(self):
        system = platform.system()
        distribution = get_distribution()
        if distribution is not None:
            msg_platform = '%s (%s)' % (system, distribution)
        else:
            msg_platform = system
        self.module.fail_json(
            msg='hostname module cannot be used on platform %s' % msg_platform)


class Hostname(object):
    """
    This is a generic Hostname manipulation class that is subclassed
    based on platform.

    A subclass may wish to set a different strategy instance to self.strategy.

    All subclasses MUST define platform and distribution (which may be None).
    """

    platform = 'Generic'
    distribution = None
    strategy_class = UnimplementedStrategy

    def __new__(cls, *args, **kwargs):
        new_cls = get_platform_subclass(Hostname)
        return super(cls, new_cls).__new__(new_cls)

    def __init__(self, module):
        self.module = module
        self.name = module.params['name']
        self.use = module.params['use']

        if self.use is not None:
            strat = globals()['%sStrategy' % STRATS[self.use]]
            self.strategy = strat(module)
        elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
            # This is Linux and systemd is active
            self.strategy = SystemdStrategy(module)
        else:
            self.strategy = self.strategy_class(module)

    def update_current_and_permanent_hostname(self):
        return self.strategy.update_current_and_permanent_hostname()

    def get_current_hostname(self):
        return self.strategy.get_current_hostname()

    def set_current_hostname(self, name):
        self.strategy.set_current_hostname(name)

    def get_permanent_hostname(self):
        return self.strategy.get_permanent_hostname()

    def set_permanent_hostname(self, name):
        self.strategy.set_permanent_hostname(name)


class BaseStrategy(object):
    def __init__(self, module):
        self.module = module
        self.changed = False

    def update_current_and_permanent_hostname(self):
        self.update_current_hostname()
        self.update_permanent_hostname()
        return self.changed

    def update_current_hostname(self):
        name = self.module.params['name']
        current_name = self.get_current_hostname()
        if current_name != name:
            if not self.module.check_mode:
                self.set_current_hostname(name)
            self.changed = True

    def update_permanent_hostname(self):
        name = self.module.params['name']
        permanent_name = self.get_permanent_hostname()
        if permanent_name != name:
            if not self.module.check_mode:
                self.set_permanent_hostname(name)
            self.changed = True

    def get_current_hostname(self):
        return self.get_permanent_hostname()

    def set_current_hostname(self, name):
        pass

    def get_permanent_hostname(self):
        raise NotImplementedError

    def set_permanent_hostname(self, name):
        raise NotImplementedError


class CommandStrategy(BaseStrategy):
    COMMAND = 'hostname'

    def __init__(self, module):
        super(CommandStrategy, self).__init__(module)
        self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)

    def get_current_hostname(self):
        cmd = [self.hostname_cmd]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        return to_native(out).strip()

    def set_current_hostname(self, name):
        cmd = [self.hostname_cmd, name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))

    def get_permanent_hostname(self):
        return 'UNKNOWN'

    def set_permanent_hostname(self, name):
        pass


class FileStrategy(BaseStrategy):
    FILE = '/etc/hostname'

    def get_permanent_hostname(self):
        if not os.path.isfile(self.FILE):
            return ''

        try:
            return get_file_lines(self.FILE)
        except Exception as e:
            self.module.fail_json(
                msg="failed to read hostname: %s" % to_native(e),
                exception=traceback.format_exc())

    def set_permanent_hostname(self, name):
        try:
            with open(self.FILE, 'w+') as f:
                f.write("%s\n" % name)
        except Exception as e:
            self.module.fail_json(
                msg="failed to update hostname: %s" % to_native(e),
                exception=traceback.format_exc())


class SLESStrategy(FileStrategy):
    """
    This is a SLES Hostname strategy class - it edits the
    /etc/HOSTNAME file.
    """
    FILE = '/etc/HOSTNAME'


class RedHatStrategy(BaseStrategy):
    """
    This is a Redhat Hostname strategy class - it edits the
    /etc/sysconfig/network file.
    """
    NETWORK_FILE = '/etc/sysconfig/network'

    def get_permanent_hostname(self):
        try:
            for line in get_file_lines(self.NETWORK_FILE):
                line = to_native(line).strip()
                if line.startswith('HOSTNAME'):
                    k, v = line.split('=')
                    return v.strip()
            self.module.fail_json(
                "Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE
            )
        except Exception as e:
            self.module.fail_json(
                msg="failed to read hostname: %s" % to_native(e),
                exception=traceback.format_exc())

    def set_permanent_hostname(self, name):
        try:
            lines = []
            found = False
            content = get_file_content(self.NETWORK_FILE, strip=False) or ""
            for line in content.splitlines(True):
                line = to_native(line)
                if line.strip().startswith('HOSTNAME'):
                    lines.append("HOSTNAME=%s\n" % name)
                    found = True
                else:
                    lines.append(line)
            if not found:
                lines.append("HOSTNAME=%s\n" % name)
            with open(self.NETWORK_FILE, 'w+') as f:
                f.writelines(lines)
        except Exception as e:
            self.module.fail_json(
                msg="failed to update hostname: %s" % to_native(e),
                exception=traceback.format_exc())


class AlpineStrategy(FileStrategy):
    """
    This is an Alpine Linux Hostname manipulation strategy class - it edits
    the /etc/hostname file then runs hostname -F /etc/hostname.
    """

    FILE = '/etc/hostname'
    COMMAND = 'hostname'

    def set_current_hostname(self, name):
        super(AlpineStrategy, self).set_current_hostname(name)
        hostname_cmd = self.module.get_bin_path(self.COMMAND, True)

        cmd = [hostname_cmd, '-F', self.FILE]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))


class SystemdStrategy(BaseStrategy):
    """
    This is a Systemd hostname manipulation strategy class - it uses
    the hostnamectl command.
    """

    COMMAND = "hostnamectl"

    def __init__(self, module):
        super(SystemdStrategy, self).__init__(module)
        self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)

    def get_current_hostname(self):
        cmd = [self.hostnamectl_cmd, '--transient', 'status']
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        return to_native(out).strip()

    def set_current_hostname(self, name):
        if len(name) > 64:
            self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
        cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))

    def get_permanent_hostname(self):
        cmd = [self.hostnamectl_cmd, '--static', 'status']
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        return to_native(out).strip()

    def set_permanent_hostname(self, name):
        if len(name) > 64:
            self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
        cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))


class OpenRCStrategy(BaseStrategy):
    """
    This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
    the /etc/conf.d/hostname file.
    """

    FILE = '/etc/conf.d/hostname'

    def get_permanent_hostname(self):
        if not os.path.isfile(self.FILE):
            return ''

        try:
            for line in get_file_lines(self.FILE):
                line = line.strip()
                if line.startswith('hostname='):
                    return line[10:].strip('"')
        except Exception as e:
            self.module.fail_json(
                msg="failed to read hostname: %s" % to_native(e),
                exception=traceback.format_exc())

    def set_permanent_hostname(self, name):
        try:
            lines = [x.strip() for x in get_file_lines(self.FILE)]

            for i, line in enumerate(lines):
                if line.startswith('hostname='):
                    lines[i] = 'hostname="%s"' % name
                    break

            with open(self.FILE, 'w') as f:
                f.write('\n'.join(lines) + '\n')
        except Exception as e:
            self.module.fail_json(
                msg="failed to update hostname: %s" % to_native(e),
                exception=traceback.format_exc())


class OpenBSDStrategy(FileStrategy):
    """
    This is an OpenBSD family Hostname manipulation strategy class - it edits
    the /etc/myname file.
    """
    FILE = '/etc/myname'


class SolarisStrategy(BaseStrategy):
    """
    This is a Solaris 11 or later Hostname manipulation strategy class - it
    executes the hostname command.
    """

    COMMAND = "hostname"

    def __init__(self, module):
        super(SolarisStrategy, self).__init__(module)
        self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)

    def set_current_hostname(self, name):
        cmd_option = '-t'
        cmd = [self.hostname_cmd, cmd_option, name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))

    def get_permanent_hostname(self):
        fmri = 'svc:/system/identity:node'
        pattern = 'config/nodename'
        cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
        rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        return to_native(out).strip()

    def set_permanent_hostname(self, name):
        cmd = [self.hostname_cmd, name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))


class FreeBSDStrategy(BaseStrategy):
    """
    This is a FreeBSD hostname manipulation strategy class - it edits
    the /etc/rc.conf.d/hostname file.
    """

    FILE = '/etc/rc.conf.d/hostname'
    COMMAND = "hostname"

    def __init__(self, module):
        super(FreeBSDStrategy, self).__init__(module)
        self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)

    def get_current_hostname(self):
        cmd = [self.hostname_cmd]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
        return to_native(out).strip()

    def set_current_hostname(self, name):
        cmd = [self.hostname_cmd, name]
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))

    def get_permanent_hostname(self):
        if not os.path.isfile(self.FILE):
            return ''

        try:
            for line in get_file_lines(self.FILE):
                line = line.strip()
                if line.startswith('hostname='):
                    return line[10:].strip('"')
        except Exception as e:
            self.module.fail_json(
                msg="failed to read hostname: %s" % to_native(e),
                exception=traceback.format_exc())

    def set_permanent_hostname(self, name):
        try:
            if os.path.isfile(self.FILE):
                lines = [x.strip() for x in get_file_lines(self.FILE)]

                for i, line in enumerate(lines):
                    if line.startswith('hostname='):
                        lines[i] = 'hostname="%s"' % name
                        break
            else:
                lines = ['hostname="%s"' % name]

            with open(self.FILE, 'w') as f:
                f.write('\n'.join(lines) + '\n')
        except Exception as e:
            self.module.fail_json(
                msg="failed to update hostname: %s" % to_native(e),
                exception=traceback.format_exc())


class DarwinStrategy(BaseStrategy):
    """
    This is a macOS hostname manipulation strategy class. It uses
    /usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.

    HostName corresponds to what most platforms consider to be hostname.
    It controls the name used on the command line and SSH.

    However, macOS also has LocalHostName and ComputerName settings.
    LocalHostName controls the Bonjour/ZeroConf name, used by services
    like AirDrop. This class implements a method, _scrub_hostname(), that mimics
    the transformations macOS makes on hostnames when entered in the Sharing
    preference pane. It replaces spaces with dashes and removes all special
    characters.

    ComputerName is the name used for user-facing GUI services, like the
    System Preferences/Sharing pane and when users connect to the Mac over the network.
    """

    def __init__(self, module):
        super(DarwinStrategy, self).__init__(module)
        self.scutil = self.module.get_bin_path('scutil', True)
        self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
        self.scrubbed_name = self._scrub_hostname(self.module.params['name'])

    def _make_translation(self, replace_chars, replacement_chars, delete_chars):
        if PY3:
            return str.maketrans(replace_chars, replacement_chars, delete_chars)

        if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
            raise ValueError('replace_chars and replacement_chars must both be strings')
        if len(replace_chars) != len(replacement_chars):
            raise ValueError('replacement_chars must be the same length as replace_chars')

        table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
        for char in delete_chars:
            table[ord(char)] = None

        return table

    def _scrub_hostname(self, name):
        """
        LocalHostName only accepts valid DNS characters while HostName and ComputerName
        accept a much wider range of characters. This function aims to mimic how macOS
        translates a friendly name to the LocalHostName.
        """

        # Replace all these characters with a single dash
        name = to_text(name)
        replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
        delete_chars = u".'"
        table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
        name = name.translate(table)

        # Replace multiple dashes with a single dash
        while '-' * 2 in name:
            name = name.replace('-' * 2, '')

        name = name.rstrip('-')

        return name

    def get_current_hostname(self):
        cmd = [self.scutil, '--get', 'HostName']
        rc, out, err = self.module.run_command(cmd)
        if rc != 0 and 'HostName: not set' not in err:
            self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))

        return to_native(out).strip()

    def get_permanent_hostname(self):
        cmd = [self.scutil, '--get', 'ComputerName']
        rc, out, err = self.module.run_command(cmd)
        if rc != 0:
            self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))

        return to_native(out).strip()

    def set_permanent_hostname(self, name):
        for hostname_type in self.name_types:
            cmd = [self.scutil, '--set', hostname_type]
            if hostname_type == 'LocalHostName':
                cmd.append(to_native(self.scrubbed_name))
            else:
                cmd.append(to_native(name))
            rc, out, err = self.module.run_command(cmd)
            if rc != 0:
                self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))

    def set_current_hostname(self, name):
        pass

    def update_current_hostname(self):
        pass

    def update_permanent_hostname(self):
        name = self.module.params['name']

        # Get all the current host name values in the order of self.name_types
        all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)

        # Get the expected host name values based on the order in self.name_types
        expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)

        # Ensure all three names are updated
        if all_names != expected_names:
            if not self.module.check_mode:
                self.set_permanent_hostname(name)
            self.changed = True


class SLESHostname(Hostname):
    platform = 'Linux'
    distribution = 'Sles'
    try:
        distribution_version = get_distribution_version()
        # cast to float may raise ValueError on non SLES, we use float for a little more safety over int
        if distribution_version and 10 <= float(distribution_version) <= 12:
            strategy_class = SLESStrategy
        else:
            raise ValueError()
    except ValueError:
        strategy_class = UnimplementedStrategy


class RHELHostname(Hostname):
    platform = 'Linux'
    distribution = 'Redhat'
    strategy_class = RedHatStrategy


class CentOSHostname(Hostname):
    platform = 'Linux'
    distribution = 'Centos'
    strategy_class = RedHatStrategy


class AnolisOSHostname(Hostname):
    platform = 'Linux'
    distribution = 'Anolis'
    strategy_class = RedHatStrategy


class CloudlinuxserverHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cloudlinuxserver'
    strategy_class = RedHatStrategy


class CloudlinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cloudlinux'
    strategy_class = RedHatStrategy


class AlinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Alinux'
    strategy_class = RedHatStrategy


class ScientificHostname(Hostname):
    platform = 'Linux'
    distribution = 'Scientific'
    strategy_class = RedHatStrategy


class OracleLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Oracle'
    strategy_class = RedHatStrategy


class VirtuozzoLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Virtuozzo'
    strategy_class = RedHatStrategy


class AmazonLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Amazon'
    strategy_class = RedHatStrategy


class DebianHostname(Hostname):
    platform = 'Linux'
    distribution = 'Debian'
    strategy_class = FileStrategy


class KylinHostname(Hostname):
    platform = 'Linux'
    distribution = 'Kylin'
    strategy_class = FileStrategy


class CumulusHostname(Hostname):
    platform = 'Linux'
    distribution = 'Cumulus-linux'
    strategy_class = FileStrategy


class KaliHostname(Hostname):
    platform = 'Linux'
    distribution = 'Kali'
    strategy_class = FileStrategy


class ParrotHostname(Hostname):
    platform = 'Linux'
    distribution = 'Parrot'
    strategy_class = FileStrategy


class UbuntuHostname(Hostname):
    platform = 'Linux'
    distribution = 'Ubuntu'
    strategy_class = FileStrategy


class LinuxmintHostname(Hostname):
    platform = 'Linux'
    distribution = 'Linuxmint'
    strategy_class = FileStrategy


class LinaroHostname(Hostname):
    platform = 'Linux'
    distribution = 'Linaro'
    strategy_class = FileStrategy


class DevuanHostname(Hostname):
    platform = 'Linux'
    distribution = 'Devuan'
    strategy_class = FileStrategy


class RaspbianHostname(Hostname):
    platform = 'Linux'
    distribution = 'Raspbian'
    strategy_class = FileStrategy


class GentooHostname(Hostname):
    platform = 'Linux'
    distribution = 'Gentoo'
    strategy_class = OpenRCStrategy


class ALTLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Altlinux'
    strategy_class = RedHatStrategy


class AlpineLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Alpine'
    strategy_class = AlpineStrategy


class OpenBSDHostname(Hostname):
    platform = 'OpenBSD'
    distribution = None
    strategy_class = OpenBSDStrategy


class SolarisHostname(Hostname):
    platform = 'SunOS'
    distribution = None
    strategy_class = SolarisStrategy


class FreeBSDHostname(Hostname):
    platform = 'FreeBSD'
    distribution = None
    strategy_class = FreeBSDStrategy


class NetBSDHostname(Hostname):
    platform = 'NetBSD'
    distribution = None
    strategy_class = FreeBSDStrategy


class NeonHostname(Hostname):
    platform = 'Linux'
    distribution = 'Neon'
    strategy_class = FileStrategy


class DarwinHostname(Hostname):
    platform = 'Darwin'
    distribution = None
    strategy_class = DarwinStrategy


class VoidLinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Void'
    strategy_class = FileStrategy


class PopHostname(Hostname):
    platform = 'Linux'
    distribution = 'Pop'
    strategy_class = FileStrategy


class EurolinuxHostname(Hostname):
    platform = 'Linux'
    distribution = 'Eurolinux'
    strategy_class = RedHatStrategy


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            use=dict(type='str', choices=STRATS.keys())
        ),
        supports_check_mode=True,
    )

    hostname = Hostname(module)
    name = module.params['name']

    current_hostname = hostname.get_current_hostname()
    permanent_hostname = hostname.get_permanent_hostname()

    changed = hostname.update_current_and_permanent_hostname()

    if name != current_hostname:
        name_before = current_hostname
    elif name != permanent_hostname:
        name_before = permanent_hostname
    else:
        name_before = permanent_hostname

    # NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
    # slow to return if the name does not resolve correctly.
    kw = dict(changed=changed, name=name,
              ansible_facts=dict(ansible_hostname=name.split('.')[0],
                                 ansible_nodename=name,
                                 ansible_fqdn=socket.getfqdn(),
                                 ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))

    if changed:
        kw['diff'] = {'after': 'hostname = ' + name + '\n',
                      'before': 'hostname = ' + name_before + '\n'}

    module.exit_json(**kw)


if __name__ == '__main__':
    main()
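Read against the source above, the `use: debian` traceback from the first report is mechanical: `STRATS['debian']` yields `'Debian'`, and `globals()['DebianStrategy']` raises `KeyError` because the module defines `DebianHostname` (with `strategy_class = FileStrategy`) but no class named `DebianStrategy`. One minimal repair, sketched here as a hypothetical patch fragment against the module above and not necessarily what the linked PR ships, is to give the lookup something to resolve:

```python
# Hypothetical patch fragment for lib/ansible/modules/hostname.py; either
# line alone makes globals()['%sStrategy' % STRATS['debian']] resolve again.

# Option 1: alias the missing class name to the strategy Debian already
# uses for /etc/hostname.
DebianStrategy = FileStrategy

# Option 2: point the 'debian' entry at a class that does exist.
STRATS['debian'] = 'File'
```

Either way, an integration case that actually exercises `use: debian` is what keeps this mapping from silently regressing again.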
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
test/integration/targets/hostname/tasks/Debian.yml
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
test/integration/targets/hostname/tasks/main.yml
# Setting the hostname in our test containers doesn't work currently
- when: ansible_facts.virtualization_type not in ('docker', 'container', 'containerd')
  block:
    - name: Include distribution specific variables
      include_vars: "{{ lookup('first_found', params) }}"
      vars:
        params:
          files:
            - "{{ ansible_facts.distribution }}.yml"
            - "{{ ansible_facts.os_family }}.yml"
            - default.yml
          paths:
            - "{{ role_path }}/vars"

    - name: Get current hostname
      command: hostname
      register: original

    - import_tasks: test_check_mode.yml
    - import_tasks: test_normal.yml

    - name: Include distribution specific tasks
      include_tasks:
        file: "{{ lookup('first_found', files) }}"
      vars:
        files:
          - "{{ ansible_facts.distribution }}.yml"
          - default.yml

  always:
    # Reset back to original hostname
    - name: Move back original file if it existed
      command: mv -f {{ _hostname_file }}.orig {{ _hostname_file }}
      when: hn_stat.stat.exists | default(False)

    - name: Delete the file if it never existed
      file:
        path: "{{ _hostname_file }}"
        state: absent
      when: not hn_stat.stat.exists | default(True)

    - name: Reset back to original hostname
      hostname:
        name: "{{ original.stdout }}"
      register: revert

    - name: Ensure original hostname was reset
      assert:
        that:
          - revert is changed
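Both `first_found` lookups above resolve to the most specific file that exists, trying candidate names in listed order and, for each name, each search path in order. A rough Python model of those semantics (a sketch for intuition, not the lookup plugin's implementation):

```python
import os


def first_found(files, paths):
    # Rough model of the lookup semantics used above: try each candidate
    # file name in order, and within each name, each search path in order;
    # the first existing match wins.
    for name in files:
        for path in paths:
            candidate = os.path.join(path, name)
            if os.path.isfile(candidate):
                return candidate
    raise LookupError('none of %r found under %r' % (files, paths))


# e.g. first_found(['Debian.yml', 'default.yml'], ['vars'])
# returns 'vars/Debian.yml' if it exists, else falls back to 'vars/default.yml'.
```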
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
test/integration/targets/hostname/tasks/test_normal.yml
- name: Run hostname module for real now hostname: name: crocodile.ansible.test.doesthiswork.net.example.com register: hn2 - name: Get hostname command: hostname register: current_after_hn2 - name: Run hostname again to ensure it does not change hostname: name: crocodile.ansible.test.doesthiswork.net.example.com register: hn3 - name: Get hostname command: hostname register: current_after_hn3 - assert: that: - hn2 is changed - hn3 is not changed - current_after_hn2.stdout == 'crocodile.ansible.test.doesthiswork.net.example.com' - current_after_hn3.stdout == current_after_hn2.stdout
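The "Unknown operation worker" error in this report is what hostnamectl prints when it receives the hostname where it expects a verb, i.e. `hostnamectl worker` instead of `hostnamectl set-hostname worker`. A toy sketch of the use=-to-strategy dispatch involved (hypothetical classes and table, not the module's real code; `module` stands in for an AnsibleModule):

```python
class DebianStrategy:
    def set_hostname(self, module, name):
        # Debian-family systems: set the hostname with hostname(1)
        module.run_command(['hostname', name], check_rc=True)

class SystemdStrategy:
    def set_hostname(self, module, name):
        # hostnamectl needs the 'set-hostname' verb; running
        # 'hostnamectl worker' yields "Unknown operation worker"
        module.run_command(['hostnamectl', 'set-hostname', name], check_rc=True)

STRATEGIES = {'debian': DebianStrategy, 'systemd': SystemdStrategy}

def get_strategy(use):
    # use= must map to a strategy whose commands match the platform
    return STRATEGIES[use]()
```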
closed
ansible/ansible
https://github.com/ansible/ansible
68,684
Cannot set hostname on ansible 2.9.6 with use=debian
##### SUMMARY Cannot set hostname on 2.9.6 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME [hostname](https://docs.ansible.com/ansible/latest/modules/hostname_module.html) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.9.6 config file = /home/porn/test/ansible/ansible.cfg configured module search path = ['/home/porn/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/porn/.local/lib/python3.6/site-packages/ansible executable location = /home/porn/.local/bin/ansible python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` $ ansible-config dump --only-changed ANSIBLE_PIPELINING(/home/porn/test/ansible/ansible.cfg) = True ANSIBLE_SSH_CONTROL_PATH(/home/porn/test/ansible/ansible.cfg) = /tmp/%%h-%%p-%%r DEFAULT_CALLBACK_WHITELIST(/home/porn/test/ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_GATHERING(/home/porn/test/ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/porn/test/ansible/ansible.cfg) = ['/home/porn/test/ansible/hosts'] DEFAULT_REMOTE_PORT(/home/porn/test/ansible/ansible.cfg) = 29010 DEFAULT_REMOTE_USER(/home/porn/test/ansible/ansible.cfg) = ubuntu HOST_KEY_CHECKING(/home/porn/test/ansible/ansible.cfg) = False INTERPRETER_PYTHON(/home/porn/test/ansible/ansible.cfg) = /usr/bin/python3 ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ``` $ uname -a Linux pornmachine 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.4 LTS Release: 18.04 Codename: bionic ``` ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> try to set hostname on another ubuntu 18.04 <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Base setup hosts: workers tasks: - hostname: name: "worker" use: debian become: yes ``` ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> The hostname should be set ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> The command fails <!--- Paste verbatim command output between quotes --> ``` TASK [hostname] ***************************************************************************************************************************************************************************** task path: /home/porn/test/ansible/worker.yml:5 Friday 03 April 2020 20:27:27 +0200 (0:00:01.969) 0:00:02.022 ********** Using module file /home/porn/.local/lib/python3.6/site-packages/ansible/modules/system/hostname.py Pipelining is enabled. 
<35.183.134.118> ESTABLISH SSH CONNECTION FOR USER: ubuntu <35.183.134.118> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=29010 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/tmp/%h-%p-%r 35.183.134.118 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qzpktirbekasjjznnyfijducgvmqtgzq ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded <35.183.134.118> (1, b'\n{"msg": "Command failed rc=1, out=, err=Unknown operation worker\\n", "failed": true, "invocation": {"module_args": {"name": "worker", "use": "debian"}}}\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /home/porn/.ssh/config\r\ndebug1: /home/porn/.ssh/config line 15: Applying options for 35.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 11218\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n') <35.183.134.118> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017 debug1: Reading configuration data /home/porn/.ssh/config debug1: /home/porn/.ssh/config line 15: Applying options for 35.* debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 11218 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 1 fatal: [35.183.134.118]: FAILED! => { "changed": false, "invocation": { "module_args": { "name": "worker", "use": "debian" } }, "msg": "Command failed rc=1, out=, err=Unknown operation worker\n" } ``` The only workaround is ansible downgrade to 2.9.5 :(
https://github.com/ansible/ansible/issues/68684
https://github.com/ansible/ansible/pull/76929
522f9d1050baa3bdea4f3f9df5772c9acdc96f73
b1457329731b485910515dbd1c9753205767d81c
2020-04-03T18:54:50Z
python
2022-02-09T15:26:42Z
test/units/modules/test_hostname.py
from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import shutil import tempfile from units.compat.mock import patch, MagicMock, mock_open from ansible.module_utils import basic from ansible.module_utils.common._utils import get_all_subclasses from ansible.modules import hostname from units.modules.utils import ModuleTestCase, set_module_args from ansible.module_utils.six import PY2 class TestHostname(ModuleTestCase): @patch('os.path.isfile') def test_stategy_get_never_writes_in_check_mode(self, isfile): isfile.return_value = True set_module_args({'name': 'fooname', '_ansible_check_mode': True}) subclasses = get_all_subclasses(hostname.BaseStrategy) module = MagicMock() for cls in subclasses: instance = cls(module) instance.module.run_command = MagicMock() instance.module.run_command.return_value = (0, '', '') m = mock_open() builtins = 'builtins' if PY2: builtins = '__builtin__' with patch('%s.open' % builtins, m): instance.get_permanent_hostname() instance.get_current_hostname() self.assertFalse( m.return_value.write.called, msg='%s called write, should not have' % str(cls)) class TestRedhatStrategy(ModuleTestCase): def setUp(self): super(TestRedhatStrategy, self).setUp() self.testdir = tempfile.mkdtemp(prefix='ansible-test-hostname-') self.network_file = os.path.join(self.testdir, "network") def tearDown(self): super(TestRedhatStrategy, self).tearDown() shutil.rmtree(self.testdir, ignore_errors=True) @property def instance(self): self.module = MagicMock() instance = hostname.RedHatStrategy(self.module) instance.NETWORK_FILE = self.network_file return instance def test_get_permanent_hostname_missing(self): self.assertIsNone(self.instance.get_permanent_hostname()) self.assertTrue(self.module.fail_json.called) self.module.fail_json.assert_called_with( "Unable to locate HOSTNAME entry in %s" % self.network_file ) def test_get_permanent_hostname_line_missing(self): with open(self.network_file, "w") as f: f.write("# some other content\n") self.assertIsNone(self.instance.get_permanent_hostname()) self.module.fail_json.assert_called_with( "Unable to locate HOSTNAME entry in %s" % self.network_file ) def test_get_permanent_hostname_existing(self): with open(self.network_file, "w") as f: f.write( "some other content\n" "HOSTNAME=foobar\n" "more content\n" ) self.assertEqual(self.instance.get_permanent_hostname(), "foobar") def test_get_permanent_hostname_existing_whitespace(self): with open(self.network_file, "w") as f: f.write( "some other content\n" " HOSTNAME=foobar \n" "more content\n" ) self.assertEqual(self.instance.get_permanent_hostname(), "foobar") def test_set_permanent_hostname_missing(self): self.instance.set_permanent_hostname("foobar") with open(self.network_file) as f: self.assertEqual(f.read(), "HOSTNAME=foobar\n") def test_set_permanent_hostname_line_missing(self): with open(self.network_file, "w") as f: f.write("# some other content\n") self.instance.set_permanent_hostname("foobar") with open(self.network_file) as f: self.assertEqual(f.read(), "# some other content\nHOSTNAME=foobar\n") def test_set_permanent_hostname_existing(self): with open(self.network_file, "w") as f: f.write( "some other content\n" "HOSTNAME=spam\n" "more content\n" ) self.instance.set_permanent_hostname("foobar") with open(self.network_file) as f: self.assertEqual( f.read(), "some other content\n" "HOSTNAME=foobar\n" "more content\n" ) def test_set_permanent_hostname_existing_whitespace(self): with open(self.network_file, "w") as f: f.write( "some other 
content\n" " HOSTNAME=spam \n" "more content\n" ) self.instance.set_permanent_hostname("foobar") with open(self.network_file) as f: self.assertEqual( f.read(), "some other content\n" "HOSTNAME=foobar\n" "more content\n" )
closed
ansible/ansible
https://github.com/ansible/ansible
77,001
Error: No such container: inventory_hostname
### Summary Our CI is breaking when gathering facts: ``` TASK [Gathering Facts] ********************************************************* fatal: [debian-11]: FAILED! => { "ansible_facts": {}, "changed": false, "failed_modules": { "ansible.legacy.setup": { "failed": true, "module_stderr": "Error: No such container: inventory_hostname\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } } } MSG: The following modules failed to execute: ansible.legacy.setup ``` https://github.com/pulp/pulp_installer/runs/5134587244?check_suite_focus=true#step:7:147 I believe it started with this commit: https://github.com/ansible/ansible/commit/6d2d476113b3a26e46c9917e213f09494fbc0a13 ### Issue Type Bug Report ### Component Name ansible.legacy.setup ### Ansible Version ```console $ ansible --version ansible [core 2.13.0.dev0] config file = /home/runner/work/pulp_installer/pulp_installer/ansible.cfg configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/ansible ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections executable location = /opt/hostedtoolcache/Python/3.9.10/x64/bin/ansible python version = 3.9.10 (main, Feb 3 2022, 07:33:39) [GCC 9.3.0] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment Debian 11, CentOS 8 stream, fedora 35, CentOS 7 ### Steps to Reproduce Run molecule test with docker images on ansible core 2.13.0.dev0 ### Expected Results Successfully gather facts ### Actual Results ```console TASK [Gathering Facts] ********************************************************* fatal: [debian-11]: FAILED! => { "ansible_facts": {}, "changed": false, "failed_modules": { "ansible.legacy.setup": { "failed": true, "module_stderr": "Error: No such container: inventory_hostname\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } } } MSG: The following modules failed to execute: ansible.legacy.setup ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77001
https://github.com/ansible/ansible/pull/77005
32f7490a2c841c3162e90712aa5faf4fdbeabda8
56edbd2bbb372a61dae2017923f1d8e33d1922d9
2022-02-10T16:55:06Z
python
2022-02-11T21:26:02Z
changelogs/fragments/handle_connection_cornercase.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,001
Error: No such container: inventory_hostname
### Summary Our CI is breaking when gathering facts: ``` TASK [Gathering Facts] ********************************************************* fatal: [debian-11]: FAILED! => { "ansible_facts": {}, "changed": false, "failed_modules": { "ansible.legacy.setup": { "failed": true, "module_stderr": "Error: No such container: inventory_hostname\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } } } MSG: The following modules failed to execute: ansible.legacy.setup ``` https://github.com/pulp/pulp_installer/runs/5134587244?check_suite_focus=true#step:7:147 I believe it started with this commit: https://github.com/ansible/ansible/commit/6d2d476113b3a26e46c9917e213f09494fbc0a13 ### Issue Type Bug Report ### Component Name ansible.legacy.setup ### Ansible Version ```console $ ansible --version ansible [core 2.13.0.dev0] config file = /home/runner/work/pulp_installer/pulp_installer/ansible.cfg configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/ansible ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections executable location = /opt/hostedtoolcache/Python/3.9.10/x64/bin/ansible python version = 3.9.10 (main, Feb 3 2022, 07:33:39) [GCC 9.3.0] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment Debian 11, CentOS 8 stream, fedora 35, CentOS 7 ### Steps to Reproduce Run molecule test with docker images on ansible core 2.13.0.dev0 ### Expected Results Successfully gather facts ### Actual Results ```console TASK [Gathering Facts] ********************************************************* fatal: [debian-11]: FAILED! => { "ansible_facts": {}, "changed": false, "failed_modules": { "ansible.legacy.setup": { "failed": true, "module_stderr": "Error: No such container: inventory_hostname\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } } } MSG: The following modules failed to execute: ansible.legacy.setup ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77001
https://github.com/ansible/ansible/pull/77005
32f7490a2c841c3162e90712aa5faf4fdbeabda8
56edbd2bbb372a61dae2017923f1d8e33d1922d9
2022-02-10T16:55:06Z
python
2022-02-11T21:26:02Z
lib/ansible/playbook/play_context.py
# -*- coding: utf-8 -*- # (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible import constants as C from ansible import context from ansible.module_utils.compat.paramiko import paramiko from ansible.playbook.attribute import FieldAttribute from ansible.playbook.base import Base from ansible.plugins import get_plugin_class from ansible.utils.display import Display from ansible.utils.ssh_functions import check_for_controlpersist display = Display() __all__ = ['PlayContext'] TASK_ATTRIBUTE_OVERRIDES = ( 'become', 'become_user', 'become_pass', 'become_method', 'become_flags', 'connection', 'docker_extra_args', # TODO: remove 'delegate_to', 'no_log', 'remote_user', ) RESET_VARS = ( 'ansible_connection', 'ansible_user', 'ansible_host', 'ansible_port', # TODO: ??? 'ansible_docker_extra_args', 'ansible_ssh_host', 'ansible_ssh_pass', 'ansible_ssh_port', 'ansible_ssh_user', 'ansible_ssh_private_key_file', 'ansible_ssh_pipelining', 'ansible_ssh_executable', ) class PlayContext(Base): ''' This class is used to consolidate the connection information for hosts in a play and child tasks, where the task may override some connection/authentication information. ''' # base _module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION) _shell = FieldAttribute(isa='string') _executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE) # connection fields, some are inherited from Base: # (connection, port, remote_user, environment, no_log) _remote_addr = FieldAttribute(isa='string') _password = FieldAttribute(isa='string') _timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT) _connection_user = FieldAttribute(isa='string') _private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE) _pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING) # networking modules _network_os = FieldAttribute(isa='string') # docker FIXME: remove these _docker_extra_args = FieldAttribute(isa='string') # ??? 
_connection_lockfd = FieldAttribute(isa='int') # privilege escalation fields _become = FieldAttribute(isa='bool') _become_method = FieldAttribute(isa='string') _become_user = FieldAttribute(isa='string') _become_pass = FieldAttribute(isa='string') _become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE) _become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS) _prompt = FieldAttribute(isa='string') # general flags _verbosity = FieldAttribute(isa='int', default=0) _only_tags = FieldAttribute(isa='set', default=set) _skip_tags = FieldAttribute(isa='set', default=set) _start_at_task = FieldAttribute(isa='string') _step = FieldAttribute(isa='bool', default=False) # "PlayContext.force_handlers should not be used, the calling code should be using play itself instead" _force_handlers = FieldAttribute(isa='bool', default=False) def __init__(self, play=None, passwords=None, connection_lockfd=None): # Note: play is really not optional. The only time it could be omitted is when we create # a PlayContext just so we can invoke its deserialize method to load it from a serialized # data source. super(PlayContext, self).__init__() if passwords is None: passwords = {} self.password = passwords.get('conn_pass', '') self.become_pass = passwords.get('become_pass', '') self._become_plugin = None self.prompt = '' self.success_key = '' # a file descriptor to be used during locking operations self.connection_lockfd = connection_lockfd # set options before play to allow play to override them if context.CLIARGS: self.set_attributes_from_cli() if play: self.set_attributes_from_play(play) def set_attributes_from_plugin(self, plugin): # generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties # get options for plugins options = C.config.get_configuration_definitions(get_plugin_class(plugin), plugin._load_name) for option in options: if option: flag = options[option].get('name') if flag: setattr(self, flag, plugin.get_option(flag)) def set_attributes_from_play(self, play): self.force_handlers = play.force_handlers def set_attributes_from_cli(self): ''' Configures this connection information instance with data from options specified by the user on the command line. These have a lower precedence than those set on the play or host. ''' if context.CLIARGS.get('timeout', False): self.timeout = int(context.CLIARGS['timeout']) # From the command line. These should probably be used directly by plugins instead # For now, they are likely to be moved to FieldAttribute defaults self.private_key_file = context.CLIARGS.get('private_key_file') # Else default self.verbosity = context.CLIARGS.get('verbosity') # Else default # Not every cli that uses PlayContext has these command line args so have a default self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default def set_task_and_variable_override(self, task, variables, templar): ''' Sets attributes from the task if they are set, which will override those from the play. 
:arg task: the task object with the parameters that were set on it :arg variables: variables from inventory :arg templar: templar instance if templating variables is needed ''' new_info = self.copy() # loop through a subset of attributes on the task object and set # connection fields based on their values for attr in TASK_ATTRIBUTE_OVERRIDES: if (attr_val := getattr(task, attr, None)) is not None: setattr(new_info, attr, attr_val) # next, use the MAGIC_VARIABLE_MAPPING dictionary to update this # connection info object with 'magic' variables from the variable list. # If the value 'ansible_delegated_vars' is in the variables, it means # we have a delegated-to host, so we check there first before looking # at the variables in general if task.delegate_to is not None: # In the case of a loop, the delegated_to host may have been # templated based on the loop variable, so we try and locate # the host name in the delegated variable dictionary here delegated_host_name = templar.template(task.delegate_to) delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict()) delegated_transport = C.DEFAULT_TRANSPORT for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'): if transport_var in delegated_vars: delegated_transport = delegated_vars[transport_var] break # make sure this delegated_to host has something set for its remote # address, otherwise we default to connecting to it by name. This # may happen when users put an IP entry into their inventory, or if # they rely on DNS for a non-inventory hostname for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'): if address_var in delegated_vars: break else: display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name) delegated_vars['ansible_host'] = delegated_host_name # reset the port back to the default if none was specified, to prevent # the delegated host from inheriting the original host's setting for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'): if port_var in delegated_vars: break else: if delegated_transport == 'winrm': delegated_vars['ansible_port'] = 5986 else: delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT # and likewise for the remote user for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'): if user_var in delegated_vars and delegated_vars[user_var]: break else: delegated_vars['ansible_user'] = task.remote_user or self.remote_user else: delegated_vars = dict() # setup shell for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'): if exe_var in variables: setattr(new_info, 'executable', variables.get(exe_var)) attrs_considered = [] for (attr, variable_names) in C.MAGIC_VARIABLE_MAPPING.items(): for variable_name in variable_names: if attr in attrs_considered: continue # if delegation task ONLY use delegated host vars, avoid delegated FOR host vars if task.delegate_to is not None: if isinstance(delegated_vars, dict) and variable_name in delegated_vars: setattr(new_info, attr, delegated_vars[variable_name]) attrs_considered.append(attr) elif variable_name in variables: setattr(new_info, attr, variables[variable_name]) attrs_considered.append(attr) # no else, as no other vars should be considered # become legacy updates -- from inventory file (inventory overrides # commandline) for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'): if become_pass_name 
in variables: break # make sure we get port defaults if needed if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None: new_info.port = int(C.DEFAULT_REMOTE_PORT) # special overrides for the connection setting if len(delegated_vars) > 0: # in the event that we were using local before make sure to reset the # connection type to the default transport for the delegated-to host, # if not otherwise specified for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'): if connection_type in delegated_vars: break else: remote_addr_local = new_info.remote_addr in C.LOCALHOST inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST if remote_addr_local and inv_hostname_local: setattr(new_info, 'connection', 'local') elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local): setattr(new_info, 'connection', C.DEFAULT_TRANSPORT) # we store original in 'connection_user' for use of network/other modules that fallback to it as login user # connection_user to be deprecated once connection=local is removed for, as local resets remote_user if new_info.connection == 'local': if not new_info.connection_user: new_info.connection_user = new_info.remote_user # set no_log to default if it was not previously set if new_info.no_log is None: new_info.no_log = C.DEFAULT_NO_LOG if task.check_mode is not None: new_info.check_mode = task.check_mode if task.diff is not None: new_info.diff = task.diff return new_info def set_become_plugin(self, plugin): self._become_plugin = plugin def update_vars(self, variables): ''' Adds 'magic' variables relating to connections to the variable dictionary provided. In case users need to access from the play, this is a legacy from runner. ''' for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items(): try: if 'become' in prop: continue var_val = getattr(self, prop) for var_opt in var_list: if var_opt not in variables and var_val is not None: variables[var_opt] = var_val except AttributeError: continue def _get_attr_connection(self): ''' connections are special, this takes care of responding correctly ''' conn_type = None if self._attributes['connection'] == 'smart': conn_type = 'ssh' # see if SSH can support ControlPersist if not use paramiko if not check_for_controlpersist('ssh') and paramiko is not None: conn_type = "paramiko" # if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems elif self._attributes['connection'] == 'persistent' and paramiko is not None: conn_type = 'paramiko' if conn_type: self.connection = conn_type return self._attributes['connection']
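For this issue, the relevant piece of the file above is the "special overrides for the connection setting" block, which decides when a delegated task may keep, or must drop, connection: local. Pulled out as a standalone sketch (simplified names; the two constants stand in for C.LOCALHOST and C.DEFAULT_TRANSPORT in the real code):

```python
LOCALHOST = frozenset(('localhost', '127.0.0.1', '::1'))  # stand-in for C.LOCALHOST
DEFAULT_TRANSPORT = 'smart'                               # stand-in for C.DEFAULT_TRANSPORT

def resolve_delegated_connection(connection, remote_addr, delegated_inventory_hostname):
    """Only fall back to 'local' when both the resolved address and the
    delegated inventory_hostname are local; undo a stale 'local' otherwise."""
    remote_addr_local = remote_addr in LOCALHOST
    inv_hostname_local = delegated_inventory_hostname in LOCALHOST
    if remote_addr_local and inv_hostname_local:
        return 'local'
    if connection == 'local' and not (remote_addr_local and inv_hostname_local):
        return DEFAULT_TRANSPORT
    return connection
```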
closed
ansible/ansible
https://github.com/ansible/ansible
76,676
set_fact behavior has changed with ansible release 2.12.1
### Summary Different ansible versions have different 'set_fact' behaviors. ### Issue Type Bug Report ### Component Name set_fact ### Ansible Version ```console ## Newest version on fedora 34 $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/.local/lib/python3.9/site-packages/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/.local/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 2.11.3 libyaml = True ## Working version (on mac os x) ansible [core 2.11.6] config file = None configured module search path = ['/Users/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/4.8.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.0 (default, Oct 13 2021, 06:45:00) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment Fedora. I used Mac OS X to confirm the 'older' behavior to avoid a downgrade process. However, the previous version that I had on Fedora was working in the same way, it's updating ansible that has caused this issue to appear. ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ### Test Playbook ``` --- ## Naming VMs for OCP ## Each VM going through this list should have the new fact "vm_name" set to "cluster_name"-"initial_vm_name" - hosts: localhost gather_facts: no tasks: - set_fact: vm_name : "{{ cluster_name }}-{{ hostvars[item].inventory_hostname }}" delegate_to: "{{ item }}" delegate_facts: True with_items: "{{ groups['vms'] }}" - debug: msg: "{{ item }} vm_name {{ hostvars[item].vm_name }}" with_items: "{{ groups['vms'] }}" ``` ### Test inventory ``` [vms] install lb [all:vars] cluster_name=ocp4-pieter ``` ### Expected Results With ansible [core 2.11.6] the output will be: ``` TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-install" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` With the newer ansible [core 2.12.1] the output looks like this: ``` TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` I have also tried with the development branch: ``` (venv) [plewyllie@fedoravm ansible-issue]$ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible [core 2.13.0.dev0] (devel b984dd9c59) last updated 2022/01/07 16:00:28 (GMT +200) config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/ansible-dev/ansible/lib/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/ansible-dev/ansible/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 3.0.3 libyaml = True (venv) [plewyllie@fedoravm ansible-issue]$ ansible-playbook -i inventory main.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [localhost] ********************************************************************************************************* TASK [ansible.builtin.set_fact] ****************************************************************************************** ok: [localhost -> install] => (item=install) ok: [localhost -> lb] => (item=lb) TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } PLAY RECAP *************************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Actual Results ```console TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76676
https://github.com/ansible/ansible/pull/77008
56edbd2bbb372a61dae2017923f1d8e33d1922d9
c9d3518d2f3812787e1627806b5fa93f8fae48a6
2022-01-07T15:04:07Z
python
2022-02-11T23:19:38Z
changelogs/fragments/fix_fax_delegation_loops.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76,676
set_fact behavior has changed with ansible release 2.12.1
### Summary Different ansible versions have different 'set_fact' behaviors. ### Issue Type Bug Report ### Component Name set_fact ### Ansible Version ```console ## Newest version on fedora 34 $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/.local/lib/python3.9/site-packages/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/.local/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 2.11.3 libyaml = True ## Working version (on mac os x) ansible [core 2.11.6] config file = None configured module search path = ['/Users/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/4.8.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.0 (default, Oct 13 2021, 06:45:00) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment Fedora. I used Mac OS X to confirm the 'older' behavior to avoid a downgrade process. However, the previous version that I had on Fedora was working in the same way, it's updating ansible that has caused this issue to appear. ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ### Test Playbook ``` --- ## Naming VMs for OCP ## Each VM going through this list should have the new fact "vm_name" set to "cluster_name"-"initial_vm_name" - hosts: localhost gather_facts: no tasks: - set_fact: vm_name : "{{ cluster_name }}-{{ hostvars[item].inventory_hostname }}" delegate_to: "{{ item }}" delegate_facts: True with_items: "{{ groups['vms'] }}" - debug: msg: "{{ item }} vm_name {{ hostvars[item].vm_name }}" with_items: "{{ groups['vms'] }}" ``` ### Test inventory ``` [vms] install lb [all:vars] cluster_name=ocp4-pieter ``` ### Expected Results With ansible [core 2.11.6] the output will be: ``` TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-install" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` With the newer ansible [core 2.12.1] the output looks like this: ``` TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` I have also tried with the development branch: ``` (venv) [plewyllie@fedoravm ansible-issue]$ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible [core 2.13.0.dev0] (devel b984dd9c59) last updated 2022/01/07 16:00:28 (GMT +200) config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/ansible-dev/ansible/lib/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/ansible-dev/ansible/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 3.0.3 libyaml = True (venv) [plewyllie@fedoravm ansible-issue]$ ansible-playbook -i inventory main.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [localhost] ********************************************************************************************************* TASK [ansible.builtin.set_fact] ****************************************************************************************** ok: [localhost -> install] => (item=install) ok: [localhost -> lb] => (item=lb) TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } PLAY RECAP *************************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Actual Results ```console TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76676
https://github.com/ansible/ansible/pull/77008
56edbd2bbb372a61dae2017923f1d8e33d1922d9
c9d3518d2f3812787e1627806b5fa93f8fae48a6
2022-01-07T15:04:07Z
python
2022-02-11T23:19:38Z
lib/ansible/executor/task_executor.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import pty import time import json import signal import subprocess import sys import termios import traceback from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip from ansible.executor.task_result import TaskResult from ansible.executor.module_common import get_action_args_with_defaults from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import binary_type from ansible.module_utils._text import to_text, to_native from ansible.module_utils.connection import write_to_file_descriptor from ansible.playbook.conditional import Conditional from ansible.playbook.task import Task from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var from ansible.vars.clean import namespace_facts, clean_facts from ansible.utils.display import Display from ansible.utils.vars import combine_vars, isidentifier display = Display() RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x] __all__ = ['TaskExecutor'] class TaskTimeoutError(BaseException): pass def task_timeout(signum, frame): raise TaskTimeoutError def remove_omit(task_args, omit_token): ''' Remove args with a value equal to the ``omit_token`` recursively to align with now having suboptions in the argument_spec ''' if not isinstance(task_args, dict): return task_args new_args = {} for i in task_args.items(): if i[1] == omit_token: continue elif isinstance(i[1], dict): new_args[i[0]] = remove_omit(i[1], omit_token) elif isinstance(i[1], list): new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]] else: new_args[i[0]] = i[1] return new_args class TaskExecutor: ''' This is the main worker class for the executor pipeline, which handles loading an action plugin to actually dispatch the task to a given host. This class roughly corresponds to the old Runner() class. ''' def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q): self._host = host self._task = task self._job_vars = job_vars self._play_context = play_context self._new_stdin = new_stdin self._loader = loader self._shared_loader_obj = shared_loader_obj self._connection = None self._final_q = final_q self._loop_eval_error = None self._task.squash() def run(self): ''' The main executor entrypoint, where we determine if the specified task requires looping and either runs the task with self._run_loop() or self._execute(). After that, the returned results are parsed and returned as a dict. 
''' display.debug("in run() - task %s" % self._task._uuid) try: try: items = self._get_loop_items() except AnsibleUndefinedVariable as e: # save the error raised here for use later items = None self._loop_eval_error = e if items is not None: if len(items) > 0: item_results = self._run_loop(items) # create the overall result item res = dict(results=item_results) # loop through the item results and set the global changed/failed/skipped result flags based on any item. res['skipped'] = True for item in item_results: if 'changed' in item and item['changed'] and not res.get('changed'): res['changed'] = True if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])): res['skipped'] = False if 'failed' in item and item['failed']: item_ignore = item.pop('_ansible_ignore_errors') if not res.get('failed'): res['failed'] = True res['msg'] = 'One or more items failed' self._task.ignore_errors = item_ignore elif self._task.ignore_errors and not item_ignore: self._task.ignore_errors = item_ignore # ensure to accumulate these for array in ['warnings', 'deprecations']: if array in item and item[array]: if array not in res: res[array] = [] if not isinstance(item[array], list): item[array] = [item[array]] res[array] = res[array] + item[array] del item[array] if not res.get('failed', False): res['msg'] = 'All items completed' if res['skipped']: res['msg'] = 'All items skipped' else: res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[]) else: display.debug("calling self._execute()") res = self._execute() display.debug("_execute() done") # make sure changed is set in the result, if it's not present if 'changed' not in res: res['changed'] = False def _clean_res(res, errors='surrogate_or_strict'): if isinstance(res, binary_type): return to_unsafe_text(res, errors=errors) elif isinstance(res, dict): for k in res: try: res[k] = _clean_res(res[k], errors=errors) except UnicodeError: if k == 'diff': # If this is a diff, substitute a replacement character if the value # is undecodable as utf8. (Fix #21804) display.warning("We were unable to decode all characters in the module return data." " Replaced some in an effort to return as much as possible") res[k] = _clean_res(res[k], errors='surrogate_then_replace') else: raise elif isinstance(res, list): for idx, item in enumerate(res): res[idx] = _clean_res(item, errors=errors) return res display.debug("dumping result to json") res = _clean_res(res) display.debug("done dumping result, returning") return res except AnsibleError as e: return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log) except Exception as e: return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log) finally: try: self._connection.close() except AttributeError: pass except Exception as e: display.debug(u"error closing connection: %s" % to_text(e)) def _get_loop_items(self): ''' Loads a lookup plugin to handle the with_* portion of a task (if specified), and returns the items result. 
''' # get search path for this task to pass to lookup plugins self._job_vars['ansible_search_path'] = self._task.get_search_path() # ensure basedir is always in (dwim already searches here but we need to display it) if self._loader.get_basedir() not in self._job_vars['ansible_search_path']: self._job_vars['ansible_search_path'].append(self._loader.get_basedir()) templar = Templar(loader=self._loader, variables=self._job_vars) items = None loop_cache = self._job_vars.get('_ansible_loop_cache') if loop_cache is not None: # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to` # to avoid reprocessing the loop items = loop_cache elif self._task.loop_with: if self._task.loop_with in self._shared_loader_obj.lookup_loader: fail = True if self._task.loop_with == 'first_found': # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing. fail = False loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail, convert_bare=False) if not fail: loop_terms = [t for t in loop_terms if not templar.is_template(t)] # get lookup mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar) # give lookup task 'context' for subdir (mostly needed for first_found) for subdir in ['template', 'var', 'file']: # TODO: move this to constants? if subdir in self._task.action: break setattr(mylookup, '_subdir', subdir + 's') # run lookup items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)) else: raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with) elif self._task.loop is not None: items = templar.template(self._task.loop) if not isinstance(items, list): raise AnsibleError( "Invalid data passed to 'loop', it requires a list, got this instead: %s." " Hint: If you passed a list/dict of just one element," " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items ) return items def _run_loop(self, items): ''' Runs the task with the loop items specified and collates the result into an array named 'results' which is inserted into the final result along with the item for which the loop ran. ''' results = [] # make copies of the job vars and task so we can add the item to # the variables and re-validate the task with the item variable # task_vars = self._job_vars.copy() task_vars = self._job_vars loop_var = 'item' index_var = None label = None loop_pause = 0 extended = False templar = Templar(loader=self._loader, variables=self._job_vars) # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate) if self._task.loop_control: loop_var = templar.template(self._task.loop_control.loop_var) index_var = templar.template(self._task.loop_control.index_var) loop_pause = templar.template(self._task.loop_control.pause) extended = templar.template(self._task.loop_control.extended) # This may be 'None',so it is templated below after we ensure a value and an item is assigned label = self._task.loop_control.label # ensure we always have a label if label is None: label = '{{' + loop_var + '}}' if loop_var in task_vars: display.warning(u"%s: The loop variable '%s' is already in use. " u"You should set the `loop_var` value in the `loop_control` option for the task" u" to something else to avoid variable collisions and unexpected behavior." 
% (self._task, loop_var)) ran_once = False no_log = False items_len = len(items) for item_index, item in enumerate(items): task_vars['ansible_loop_var'] = loop_var task_vars[loop_var] = item if index_var: task_vars['ansible_index_var'] = index_var task_vars[index_var] = item_index if extended: task_vars['ansible_loop'] = { 'allitems': items, 'index': item_index + 1, 'index0': item_index, 'first': item_index == 0, 'last': item_index + 1 == items_len, 'length': items_len, 'revindex': items_len - item_index, 'revindex0': items_len - item_index - 1, } try: task_vars['ansible_loop']['nextitem'] = items[item_index + 1] except IndexError: pass if item_index - 1 >= 0: task_vars['ansible_loop']['previtem'] = items[item_index - 1] # Update template vars to reflect current loop iteration templar.available_variables = task_vars # pause between loop iterations if loop_pause and ran_once: try: time.sleep(float(loop_pause)) except ValueError as e: raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e))) else: ran_once = True try: tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True) tmp_task._parent = self._task._parent tmp_play_context = self._play_context.copy() except AnsibleParserError as e: results.append(dict(failed=True, msg=to_text(e))) continue # now we swap the internal task and play context with their copies, # execute, and swap them back so we can do the next iteration cleanly (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) res = self._execute(variables=task_vars) task_fields = self._task.dump_attrs() (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) # update 'general no_log' based on specific no_log no_log = no_log or tmp_task.no_log # now update the result with the item info, and append the result # to the list of results res[loop_var] = item res['ansible_loop_var'] = loop_var if index_var: res[index_var] = item_index res['ansible_index_var'] = index_var if extended: res['ansible_loop'] = task_vars['ansible_loop'] res['_ansible_item_result'] = True res['_ansible_ignore_errors'] = task_fields.get('ignore_errors') # gets templated here unlike rest of loop_control fields, depends on loop_var above try: res['_ansible_item_label'] = templar.template(label, cache=False) except AnsibleUndefinedVariable as e: res.update({ 'failed': True, 'msg': 'Failed to template loop_control.label: %s' % to_text(e) }) tr = TaskResult( self._host.name, self._task._uuid, res, task_fields=task_fields, ) if tr.is_failed() or tr.is_unreachable(): self._final_q.send_callback('v2_runner_item_on_failed', tr) elif tr.is_skipped(): self._final_q.send_callback('v2_runner_item_on_skipped', tr) else: if getattr(self._task, 'diff', False): self._final_q.send_callback('v2_on_file_diff', tr) if self._task.action not in C._ACTION_INVENTORY_TASKS: self._final_q.send_callback('v2_runner_item_on_ok', tr) results.append(res) del task_vars[loop_var] # clear 'connection related' plugin variables for next iteration if self._connection: clear_plugins = { 'connection': self._connection._load_name, 'shell': self._connection._shell._load_name } if self._connection.become: clear_plugins['become'] = self._connection.become._load_name for plugin_type, plugin_name in clear_plugins.items(): for var in C.config.get_plugin_vars(plugin_type, plugin_name): if var in task_vars and var not in self._job_vars: del task_vars[var] self._task.no_log 
= no_log return results def _execute(self, variables=None): ''' The primary workhorse of the executor system, this runs the task on the specified host (which may be the delegated_to host) and handles the retry/until and block rescue/always execution ''' if variables is None: variables = self._job_vars templar = Templar(loader=self._loader, variables=variables) context_validation_error = None # a certain subset of variables exist. tempvars = variables.copy() try: # TODO: remove play_context as this does not take delegation nor loops correctly into account, # the task itself should hold the correct values for connection/shell/become/terminal plugin options to finalize. # Kept for now for backwards compatibility and a few functions that are still exclusive to it. # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. self._play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not self._play_context.remote_addr: self._play_context.remote_addr = self._host.address # We also add "magic" variables back into the variables dict to make sure self._play_context.update_vars(tempvars) except AnsibleError as e: # save the error, which we'll raise later if we don't end up # skipping this task during the conditional evaluation step context_validation_error = e no_log = self._play_context.no_log # Evaluate the conditional (if any) for this task, which we do before running # the final task post-validation. We do this before the post validation due to # the fact that the conditional may specify that the task be skipped due to a # variable not being present which would otherwise cause validation to fail try: if not self._task.evaluate_conditional(templar, tempvars): display.debug("when evaluation is False, skipping this task") return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=no_log) except AnsibleError as e: # loop error takes precedence if self._loop_eval_error is not None: # Display the error from the conditional as well to prevent # losing information useful for debugging. 
display.v(to_text(e)) raise self._loop_eval_error # pylint: disable=raising-bad-type raise # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task if self._loop_eval_error is not None: raise self._loop_eval_error # pylint: disable=raising-bad-type # if we ran into an error while setting up the PlayContext, raise it now, unless it is a known issue with delegation # and undefined vars (correct values are in cvars later on and connection plugins, if still in error, blow up there) if context_validation_error is not None: raiseit = True if self._task.delegate_to: if isinstance(context_validation_error, AnsibleUndefinedVariable): raiseit = False elif isinstance(context_validation_error, AnsibleParserError): # parser error, might be caused by undef too orig_exc = getattr(context_validation_error, 'orig_exc', None) if isinstance(orig_exc, AnsibleUndefinedVariable): raiseit = False if raiseit: raise context_validation_error # pylint: disable=raising-bad-type # set templar to use temp variables until loop is evaluated templar.available_variables = tempvars # if this task is a TaskInclude, we just return now with a success code so the # main thread can expand the task list for the given host if self._task.action in C._ACTION_ALL_INCLUDE_TASKS: include_args = self._task.args.copy() include_file = include_args.pop('_raw_params', None) if not include_file: return dict(failed=True, msg="No include file was specified to the include") include_file = templar.template(include_file) return dict(include=include_file, include_args=include_args) # if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host elif self._task.action in C._ACTION_INCLUDE_ROLE: include_args = self._task.args.copy() return dict(include_args=include_args) # Now we do final validation on the task, which sets all fields to their final values. try: self._task.post_validate(templar=templar) except AnsibleError as e: raise except Exception: return dict(changed=False, failed=True, _ansible_no_log=no_log, exception=to_text(traceback.format_exc())) if '_variable_params' in self._task.args: variable_params = self._task.args.pop('_variable_params') if isinstance(variable_params, dict): if C.INJECT_FACTS_AS_VARS: display.warning("Using a variable for a task's 'args' is unsafe in some situations " "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)") variable_params.update(self._task.args) self._task.args = variable_params # update no_log to task value, now that we have it templated no_log = self._task.no_log # free tempvars up, not used anymore, cvars and vars_copy should be mainly used after this point # updating the original 'variables' at the end tempvars = {} # setup cvars copy, used for all connection related templating if self._task.delegate_to: # use vars from delegated host (which already include task vars) instead of original host cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {}) else: # just use normal host vars cvars = variables templar.available_variables = cvars # use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None: current_connection = templar.template(cvars['ansible_connection']) else: current_connection = self._task.connection # get the connection and the handler for this execution if (not self._connection or not getattr(self._connection, 'connected', False) or self._connection._load_name != current_connection or # pc compare, left here for old plugins, but should be irrelevant for those # using get_option, since they are cleared each iteration. self._play_context.remote_addr != self._connection._play_context.remote_addr): self._connection = self._get_connection(cvars, templar, current_connection) else: # if connection is reused, its _play_context is no longer valid and needs # to be replaced with the one templated above, in case other data changed self._connection._play_context = self._play_context plugin_vars = self._set_connection_options(cvars, templar) # make a copy of the job vars here, as we update them here and later, # but don't want to pollute the original vars_copy = variables.copy() # update with connection info (i.e. ansible_host/ansible_user) self._connection.update_vars(vars_copy) templar.available_variables = vars_copy # TODO: eventually remove as pc is taken out of the resolution path # feed back into pc to ensure plugins not using get_option can get correct value self._connection._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=vars_copy, templar=templar) # TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules # special handling for python interpreter for network_os, default to ansible python unless overridden if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars: # this also avoids 'python discovery' cvars['ansible_python_interpreter'] = sys.executable # get handler self._handler = self._get_action_handler(connection=self._connection, templar=templar) # Apply default params for action/module, if present self._task.args = get_action_args_with_defaults( self._task.resolved_action, self._task.args, self._task.module_defaults, templar, action_groups=self._task._parent._play._action_groups ) # And filter out any fields which were set to default(omit), and got the omit token value omit_token = variables.get('omit') if omit_token is not None: self._task.args = remove_omit(self._task.args, omit_token) # Read some values from the task, so that we can modify them if need be if self._task.until: retries = self._task.retries if retries is None: retries = 3 elif retries <= 0: retries = 1 else: retries += 1 else: retries = 1 delay = self._task.delay if delay < 0: delay = 1 display.debug("starting attempt loop") result = None for attempt in range(1, retries + 1): display.debug("running the handler") try: if self._task.timeout: old_sig = signal.signal(signal.SIGALRM, task_timeout) signal.alarm(self._task.timeout) result = self._handler.run(task_vars=vars_copy) except (AnsibleActionFail, AnsibleActionSkip) as e: return e.result except AnsibleConnectionFailure as e: return dict(unreachable=True, msg=to_text(e)) except TaskTimeoutError as e: msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout) return dict(failed=True, msg=msg) finally: if self._task.timeout: signal.alarm(0) old_sig = signal.signal(signal.SIGALRM, old_sig) self._handler.cleanup() display.debug("handler run complete") # preserve no log result["_ansible_no_log"] = no_log if self._task.action not in
C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution if self._task.register: if not isidentifier(self._task.register): raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register) vars_copy[self._task.register] = result if self._task.async_val > 0: if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'): result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy) if result.get('failed'): self._final_q.send_callback( 'v2_runner_on_async_failed', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) else: self._final_q.send_callback( 'v2_runner_on_async_ok', TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())) # ensure no log is preserved result["_ansible_no_log"] = no_log # helper methods for use below in evaluating changed/failed_when def _evaluate_changed_when_result(result): if self._task.changed_when is not None and self._task.changed_when: cond = Conditional(loader=self._loader) cond.when = self._task.changed_when result['changed'] = cond.evaluate_conditional(templar, vars_copy) def _evaluate_failed_when_result(result): if self._task.failed_when: cond = Conditional(loader=self._loader) cond.when = self._task.failed_when failed_when_result = cond.evaluate_conditional(templar, vars_copy) result['failed_when_result'] = result['failed'] = failed_when_result else: failed_when_result = False return failed_when_result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and self._task.delegate_facts: if '_ansible_delegated_vars' in vars_copy: vars_copy['_ansible_delegated_vars'].update(result['ansible_facts']) else: vars_copy['_ansible_delegated_vars'] = result['ansible_facts'] else: vars_copy.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: vars_copy.update(clean_facts(af)) # set the failed property if it was missing. if 'failed' not in result: # rc is here for backwards compatibility and modules that use it instead of 'failed' if 'rc' in result and result['rc'] not in [0, "0"]: result['failed'] = True else: result['failed'] = False # Make attempts and retries available early to allow their use in changed/failed_when if self._task.until: result['attempts'] = attempt # set the changed property if it was missing. 
if 'changed' not in result: result['changed'] = False if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # re-update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution # This gives changed/failed_when access to additional recently modified # attributes of result if self._task.register: vars_copy[self._task.register] = result # if we didn't skip this task, use the helpers to evaluate the changed/ # failed_when properties if 'skipped' not in result: try: condname = 'changed' _evaluate_changed_when_result(result) condname = 'failed' _evaluate_failed_when_result(result) except AnsibleError as e: result['failed'] = True result['%s_when_result' % condname] = to_text(e) if retries > 1: cond = Conditional(loader=self._loader) cond.when = self._task.until if cond.evaluate_conditional(templar, vars_copy): break else: # no conditional check, or it failed, so sleep for the specified time if attempt < retries: result['_ansible_retry'] = True result['retries'] = retries display.debug('Retrying task, attempt %d of %d' % (attempt, retries)) self._final_q.send_callback( 'v2_runner_retry', TaskResult( self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs() ) ) time.sleep(delay) self._handler = self._get_action_handler(connection=self._connection, templar=templar) else: if retries > 1: # we ran out of attempts, so mark the result as failed result['attempts'] = retries - 1 result['failed'] = True if self._task.action not in C._ACTION_WITH_CLEAN_FACTS: result = wrap_var(result) # do the final update of the local variables here, for both registered # values and any facts which may have been created if self._task.register: variables[self._task.register] = result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: if self._task.delegate_to and self._task.delegate_facts: if '_ansible_delegated_vars' in variables: variables['_ansible_delegated_vars'].update(result['ansible_facts']) else: variables['_ansible_delegated_vars'] = result['ansible_facts'] else: variables.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: variables.update(clean_facts(af)) # save the notification target in the result, if it was specified, as # this task may be running in a loop in which case the notification # may be item-specific, ie. 
"notify: service {{item}}" if self._task.notify is not None: result['_ansible_notify'] = self._task.notify # add the delegated vars to the result, so we can reference them # on the results side without having to do any further templating # also now add conneciton vars results when delegating if self._task.delegate_to: result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to} for k in plugin_vars: result["_ansible_delegated_vars"][k] = cvars.get(k) # note: here for callbacks that rely on this info to display delegation for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'): if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars: result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed) # and return display.debug("attempt loop complete, returning result") return result def _poll_async_result(self, result, templar, task_vars=None): ''' Polls for the specified JID to be complete ''' if task_vars is None: task_vars = self._job_vars async_jid = result.get('ansible_job_id') if async_jid is None: return dict(failed=True, msg="No job id was returned by the async task") # Create a new pseudo-task to run the async_status module, and run # that (with a sleep for "poll" seconds between each retry) until the # async time limit is exceeded. async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment)) # FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized # Because this is an async task, the action handler is async. However, # we need the 'normal' action handler for the status check, so get it # now via the action_loader async_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=async_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) time_left = self._task.async_val while time_left > 0: time.sleep(self._task.poll) try: async_result = async_handler.run(task_vars=task_vars) # We do not bail out of the loop in cases where the failure # is associated with a parsing error. The async_runner can # have issues which result in a half-written/unparseable result # file on disk, which manifests to the user as a timeout happening # before it's time to timeout. if (int(async_result.get('finished', 0)) == 1 or ('failed' in async_result and async_result.get('_ansible_parsed', False)) or 'skipped' in async_result): break except Exception as e: # Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal. # On an exception, call the connection's reset method if it has one # (eg, drop/recreate WinRM connection; some reused connections are in a broken state) display.vvvv("Exception during async poll, retrying... 
(%s)" % to_text(e)) display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc())) try: async_handler._connection.reset() except AttributeError: pass # Little hack to raise the exception if we've exhausted the timeout period time_left -= self._task.poll if time_left <= 0: raise else: time_left -= self._task.poll self._final_q.send_callback( 'v2_runner_on_async_poll', TaskResult( self._host.name, async_task._uuid, async_result, task_fields=async_task.dump_attrs(), ), ) if int(async_result.get('finished', 0)) != 1: if async_result.get('_ansible_parsed'): return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result) else: return dict(failed=True, msg="async task produced unparseable results", async_result=async_result) else: # If the async task finished, automatically cleanup the temporary # status file left behind. cleanup_task = Task.load( { 'async_status': { 'jid': async_jid, 'mode': 'cleanup', }, 'environment': self._task.environment, } ) cleanup_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=cleanup_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) cleanup_handler.run(task_vars=task_vars) cleanup_handler.cleanup(force=True) async_handler.cleanup(force=True) return async_result def _get_become(self, name): become = become_loader.get(name) if not become: raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. " "Use `ansible-doc -t become -l` to list available plugins." % name) return become def _get_connection(self, cvars, templar, current_connection): ''' Reads the connection property for the host, and returns the correct connection object from the list of connection plugins ''' self._play_context.connection = current_connection # TODO: play context has logic to update the connection for 'smart' # (default value, will chose between ssh and paramiko) and 'persistent' # (really paramiko), eventually this should move to task object itself. conn_type = self._play_context.connection connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context( conn_type, self._play_context, self._new_stdin, task_uuid=self._task._uuid, ansible_playbook_pid=to_text(os.getppid()) ) if not connection: raise AnsibleError("the connection plugin '%s' was not found" % conn_type) # load become plugin if needed if cvars.get('ansible_become') is not None: become = boolean(templar.template(cvars['ansible_become'])) else: become = self._task.become if become: if cvars.get('ansible_become_method'): become_plugin = self._get_become(templar.template(cvars['ansible_become_method'])) else: become_plugin = self._get_become(self._task.become_method) try: connection.set_become_plugin(become_plugin) except AttributeError: # Older connection plugin that does not support set_become_plugin pass if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False): raise AnsibleError( "The '%s' connection does not provide a TTY which is required for the selected " "become plugin: %s." 
% (conn_type, become_plugin.name) ) # Backwards compat for connection plugins that don't support become plugins # Just do this unconditionally for now, we could move it inside of the # AttributeError above later self._play_context.set_become_plugin(become_plugin.name) # Also backwards compat call for those still using play_context self._play_context.set_attributes_from_plugin(connection) if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)): self._play_context.timeout = connection.get_option('persistent_command_timeout') display.vvvv('attempting to start connection', host=self._play_context.remote_addr) display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr) options = self._get_persistent_connection_options(connection, cvars, templar) socket_path = start_connection(self._play_context, options, self._task._uuid) display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr) setattr(connection, '_socket_path', socket_path) return connection def _get_persistent_connection_options(self, connection, final_vars, templar): option_vars = C.config.get_plugin_vars('connection', connection._load_name) plugin = connection._sub_plugin if plugin.get('type'): option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name'])) options = {} for k in option_vars: if k in final_vars: options[k] = templar.template(final_vars[k]) return options def _set_plugin_options(self, plugin_type, variables, templar, task_keys): try: plugin = getattr(self._connection, '_%s' % plugin_type) except AttributeError: # Some plugins are assigned to private attrs, ``become`` is not plugin = getattr(self._connection, plugin_type) option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name) options = {} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # TODO move to task method? plugin.set_options(task_keys=task_keys, var_options=options) return option_vars def _set_connection_options(self, variables, templar): # keep list of variable names possibly consumed varnames = [] # grab list of usable vars for this plugin option_vars = C.config.get_plugin_vars('connection', self._connection._load_name) varnames.extend(option_vars) # create dict of 'templated vars' options = {'_extras': {}} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # add extras if plugin supports them if getattr(self._connection, 'allow_extras', False): for k in variables: if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options: options['_extras'][k] = templar.template(variables[k]) task_keys = self._task.dump_attrs() # The task_keys 'timeout' attr is the task's timeout, not the connection timeout. # The connection timeout is threaded through the play_context for now. task_keys['timeout'] = self._play_context.timeout if self._play_context.password: # The connection password is threaded through the play_context for # now. This is something we ultimately want to avoid, but the first # step is to get connection plugins pulling the password through the # config system instead of directly accessing play_context. 
task_keys['password'] = self._play_context.password # Prevent task retries from overriding connection retries del(task_keys['retries']) # set options with 'templated vars' specific to this plugin and dependent ones self._connection.set_options(task_keys=task_keys, var_options=options) varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys)) if self._connection.become is not None: if self._play_context.become_pass: # FIXME: eventually remove from task and play_context, here for backwards compat # keep out of play objects to avoid accidental disclosure, only become plugin should have # The become pass is already in the play_context if given on # the CLI (-K). Make the plugin aware of it in this case. task_keys['become_pass'] = self._play_context.become_pass varnames.extend(self._set_plugin_options('become', variables, templar, task_keys)) # FOR BACKWARDS COMPAT: for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'): try: setattr(self._play_context, option, self._connection.become.get_option(option)) except KeyError: pass # some plugins don't support all base flags self._play_context.prompt = self._connection.become.prompt return varnames def _get_action_handler(self, connection, templar): ''' Returns the correct action plugin to handle the requested task action ''' module_collection, separator, module_name = self._task.action.rpartition(".") module_prefix = module_name.split('_')[0] if module_collection: # For network modules, which look for one action plugin per platform, look for the # action plugin in the same collection as the module by prefixing the action plugin # with the same collection. network_action = "{0}.{1}".format(module_collection, module_prefix) else: network_action = module_prefix collections = self._task.collections # let action plugin override module, fall back to 'normal' action plugin otherwise if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections): handler_name = self._task.action elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))): handler_name = network_action display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name, action=self._task.action), host=self._play_context.remote_addr) else: # use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search handler_name = 'ansible.legacy.normal' collections = None # until then, we don't want the task's collection list to be consulted; use the builtin handler = self._shared_loader_obj.action_loader.get( handler_name, task=self._task, connection=connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, collection_list=collections ) if not handler: raise AnsibleError("the handler '%s' was not found" % handler_name) return handler def start_connection(play_context, variables, task_uuid): ''' Starts the persistent connection ''' candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])] candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep)) for dirname in candidate_paths: ansible_connection = os.path.join(dirname, 'ansible-connection') if os.path.isfile(ansible_connection): display.vvvv("Found ansible-connection at path {0}".format(ansible_connection)) break else: raise AnsibleError("Unable to find location of 'ansible-connection'.
" "Please set or check the value of ANSIBLE_CONNECTION_PATH") env = os.environ.copy() env.update({ # HACK; most of these paths may change during the controller's lifetime # (eg, due to late dynamic role includes, multi-playbook execution), without a way # to invalidate/update, ansible-connection won't always see the same plugins the controller # can. 'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(), 'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(), 'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)), 'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(), 'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(), 'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(), 'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(), }) python = sys.executable master, slave = pty.openpty() p = subprocess.Popen( [python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) os.close(slave) # We need to set the pty into noncanonical mode. This ensures that we # can receive lines longer than 4095 characters (plus newline) without # truncating. old = termios.tcgetattr(master) new = termios.tcgetattr(master) new[3] = new[3] & ~termios.ICANON try: termios.tcsetattr(master, termios.TCSANOW, new) write_to_file_descriptor(master, variables) write_to_file_descriptor(master, play_context.serialize()) (stdout, stderr) = p.communicate() finally: termios.tcsetattr(master, termios.TCSANOW, old) os.close(master) if p.returncode == 0: result = json.loads(to_text(stdout, errors='surrogate_then_replace')) else: try: result = json.loads(to_text(stderr, errors='surrogate_then_replace')) except getattr(json.decoder, 'JSONDecodeError', ValueError): # JSONDecodeError only available on Python 3.5+ result = {'error': to_text(stderr, errors='surrogate_then_replace')} if 'messages' in result: for level, message in result['messages']: if level == 'log': display.display(message, log_only=True) elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'): getattr(display, level)(message, host=play_context.remote_addr) else: if hasattr(display, level): getattr(display, level)(message) else: display.vvvv(message, host=play_context.remote_addr) if 'error' in result: if play_context.verbosity > 2: if result.get('exception'): msg = "The full traceback is:\n" + result['exception'] display.display(msg, color=C.COLOR_ERROR) raise AnsibleError(result['error']) return result['socket_path']
closed
ansible/ansible
https://github.com/ansible/ansible
76676
set_fact behavior has changed with ansible release 2.12.1
### Summary Different ansible versions have different 'set_fact' behaviors. ### Issue Type Bug Report ### Component Name set_fact ### Ansible Version ```console ## Newest version on fedora 34 $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/.local/lib/python3.9/site-packages/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/.local/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 2.11.3 libyaml = True ## Working version (on mac os x) ansible [core 2.11.6] config file = None configured module search path = ['/Users/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/4.8.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.0 (default, Oct 13 2021, 06:45:00) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.2 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment Fedora. I used Mac OS X to confirm the 'older' behavior to avoid a downgrade process. However, the previous version that I had on Fedora was working in the same way, it's updating ansible that has caused this issue to appear. ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ### Test Playbook ``` --- ## Naming VMs for OCP ## Each VM going through this list should have the new fact "vm_name" set to "cluster_name"-"initial_vm_name" - hosts: localhost gather_facts: no tasks: - set_fact: vm_name : "{{ cluster_name }}-{{ hostvars[item].inventory_hostname }}" delegate_to: "{{ item }}" delegate_facts: True with_items: "{{ groups['vms'] }}" - debug: msg: "{{ item }} vm_name {{ hostvars[item].vm_name }}" with_items: "{{ groups['vms'] }}" ``` ### Test inventory ``` [vms] install lb [all:vars] cluster_name=ocp4-pieter ``` ### Expected Results With ansible [core 2.11.6] the output will be: ``` TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-install" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` With the newer ansible [core 2.12.1] the output looks like this: ``` TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` I have also tried with the development branch: ``` (venv) [plewyllie@fedoravm ansible-issue]$ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible [core 2.13.0.dev0] (devel b984dd9c59) last updated 2022/01/07 16:00:28 (GMT +200) config file = None configured module search path = ['/home/plewyllie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/plewyllie/ansible-dev/ansible/lib/ansible ansible collection location = /home/plewyllie/.ansible/collections:/usr/share/ansible/collections executable location = /home/plewyllie/ansible-dev/ansible/bin/ansible python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] jinja version = 3.0.3 libyaml = True (venv) [plewyllie@fedoravm ansible-issue]$ ansible-playbook -i inventory main.yml [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. PLAY [localhost] ********************************************************************************************************* TASK [ansible.builtin.set_fact] ****************************************************************************************** ok: [localhost -> install] => (item=install) ok: [localhost -> lb] => (item=lb) TASK [debug] ************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } PLAY RECAP *************************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Actual Results ```console TASK [debug] ************************************************************************************************************************* ok: [localhost] => (item=install) => { "msg": "install vm_name ocp4-pieter-lb" } ok: [localhost] => (item=lb) => { "msg": "lb vm_name ocp4-pieter-lb" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76676
https://github.com/ansible/ansible/pull/77008
56edbd2bbb372a61dae2017923f1d8e33d1922d9
c9d3518d2f3812787e1627806b5fa93f8fae48a6
2022-01-07T15:04:07Z
python
2022-02-11T23:19:38Z
test/integration/targets/delegate_to/delegate_facts_loop.yml
- hosts: localhost gather_facts: no tasks: - set_fact: test: 123 delegate_to: "{{ item }}" delegate_facts: true when: test is not defined loop: "{{ groups['all'] | difference(['localhost']) }}" - name: ensure we didn't create it on current host assert: that: - test is undefined - name: ensure facts get created assert: that: - "'test' in hostvars[item]" - hostvars[item]['test'] == 123 loop: "{{ groups['all'] | difference(['localhost'])}}"
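The regression test above pins the expected behavior: a delegated `set_fact` loop must leave a distinct value on each delegated host. As a plain-Python illustration of the "last item wins" symptom from the report (hypothetical code, not the actual Ansible internals or the fix):

```python
# Hypothetical illustration of the symptom: if per-host facts share one mutable
# mapping, every host ends up seeing the last loop item; a per-item copy does not.
import copy

shared = {}
buggy, fixed = {}, {}
for host in ('install', 'lb'):
    shared['vm_name'] = 'ocp4-pieter-%s' % host
    buggy[host] = shared                 # shared reference
    fixed[host] = copy.deepcopy(shared)  # per-item copy

print(buggy['install']['vm_name'])  # ocp4-pieter-lb (wrong)
print(fixed['install']['vm_name'])  # ocp4-pieter-install (expected)
```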
closed
ansible/ansible
https://github.com/ansible/ansible
77004
to_json filter sometimes turns values into strings when templating
### Summary It seems like the to_json filter behaves differently depending on: * Whether it's passed the result of a template call directly, or whether the result of the template call is placed into a fact first * Whether the template call actually does any substitutions or not * Whether I'm using ansible community <=v4.10.0 or >=5.0.1 When I run the playbook using Ansible 4.10.0, to_json turns the value into a JSON string, if and only if it's passed the result of a template call which does no substitutions. If the template has a variable in it, or if the result of the template call is stored in a fact before being sent to to_json, the JSON looks how I'd expect. When using Ansible 5.0.1, the value is turned into a string whenever to_json is passed the result of a template call, regardless of whether the template has a variable in it or not. There's a lot of permutations here, and I'm not sure which, if any, of these behaviors are unexpected. I've found https://github.com/ansible/ansible/issues/76443, which seems closely related, but: * I'm already using `template` and not `copy` as recommended [here](https://github.com/ansible/ansible/issues/76443#issuecomment-984889291) * If it's the case that `to_json and to_nice_json ALWAYS are supposed to return a string (of serialized JSON)` as mentioned [here](https://github.com/ansible/ansible/issues/76443#issuecomment-984879201), it seems that there are edge cases where it doesn't ### Issue Type Bug Report ### Component Name to_json ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/riendeau/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/riendeau/venvs/ansible5x/lib/python3.9/site-packages/ansible ansible collection location = /Users/riendeau/.ansible/collections:/usr/share/ansible/collections executable location = /Users/riendeau/venvs/ansible5x/bin/ansible python version = 3.9.10 (v3.9.10:f2f3f53782, Jan 13 2022, 17:02:14) [Clang 6.0 (clang-600.0.57)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed COLOR_DEBUG(env: ANSIBLE_COLOR_DEBUG) = bright gray ``` ### OS / Environment Mac OS Monterey 12.1 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) --- - name: Test with trivial template hosts: localhost tasks: - copy: dest: "trivial.json.j2" content: '{ "hello": "world" }' - set_fact: template_result: "{{ lookup('template', 'trivial.json.j2') }}" - set_fact: json_from_fact_trivial_template: "{{ template_result | to_json }}" json_from_template_trivial_template: "{{ lookup('template', 'trivial.json.j2') | to_json }}" - debug: var: json_from_fact_trivial_template - debug: var: json_from_template_trivial_template - name: Test with template including variable hosts: localhost tasks: - copy: dest: "withvar.json.j2" content: '{% raw %}{ "{{ greeting }}": "world" }{% endraw %}' - set_fact: greeting: 'howdy' - set_fact: template_result: "{{ lookup('template', 'withvar.json.j2') }}" - set_fact: json_from_fact_template_withvar: "{{ template_result | to_json }}" json_from_template_withvar: "{{ lookup('template', 'withvar.json.j2') | to_json }}" - debug: var: json_from_fact_template_withvar - debug: var: json_from_template_withvar ``` ### Expected Results I expected: * Consistent behavior between Ansible 4.10.0 and Ansible 5.0.1, or documentation of a behavior change * Consistent behavior within Ansible 
5.0.1 ### Actual Results ```console (Output with Ansible 4.10.0:) TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_fact_trivial_template": { "hello": "world" } } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_template_trivial_template": "\"{ \\\"hello\\\": \\\"world\\\" }\"" } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_fact_template_withvar": { "howdy": "world" } } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_template_withvar": { "howdy": "world" } } ------ (Output with Ansible 5.0.1:) TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_fact_trivial_template": { "hello": "world" } } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_template_trivial_template": "\"{ \\\"hello\\\": \\\"world\\\" }\"" } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_fact_template_withvar": { "howdy": "world" } } TASK [debug] **************************************************************************************************************************************************************** ok: [localhost] => { "json_from_template_withvar": "\"{\\\"howdy\\\": \\\"world\\\"}\"" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77004
https://github.com/ansible/ansible/pull/77016
c9d3518d2f3812787e1627806b5fa93f8fae48a6
3779c1f278685c5a8d7f78942ce649f6805a5775
2022-02-10T17:48:08Z
python
2022-02-14T14:21:17Z
changelogs/fragments/77004-restore-missing-default.yml
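All the permutations in the report above reduce to one question: does `to_json` receive already-parsed data or a raw string? A standalone sketch of the two outcomes (`to_json` behaves essentially like `json.dumps` here):

```python
# Standalone sketch: serializing a parsed mapping vs. a raw JSON string.
import json

parsed = {"hello": "world"}    # template result converted to data
raw = '{ "hello": "world" }'   # template result left as a plain string

print(json.dumps(parsed))  # {"hello": "world"}
print(json.dumps(raw))     # "{ \"hello\": \"world\" }"  (doubly encoded)
```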
closed
ansible/ansible
https://github.com/ansible/ansible
77004
to_json filter sometimes turns values into strings when templating
https://github.com/ansible/ansible/issues/77004
https://github.com/ansible/ansible/pull/77016
c9d3518d2f3812787e1627806b5fa93f8fae48a6
3779c1f278685c5a8d7f78942ce649f6805a5775
2022-02-10T17:48:08Z
python
2022-02-14T14:21:17Z
lib/ansible/plugins/lookup/template.py
# Copyright: (c) 2012, Michael DeHaan <[email protected]> # Copyright: (c) 2012-17, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ name: template author: Michael DeHaan version_added: "0.9" short_description: retrieve contents of file after templating with Jinja2 description: - Returns a list of strings; for each template in the list of templates you pass in, returns a string containing the results of processing that template. options: _terms: description: list of files to template convert_data: type: bool description: - Whether to convert YAML into data. If False, strings that are YAML will be left untouched. - Mutually exclusive with the jinja2_native option. variable_start_string: description: The string marking the beginning of a print statement. default: '{{' version_added: '2.8' type: str variable_end_string: description: The string marking the end of a print statement. default: '}}' version_added: '2.8' type: str jinja2_native: description: - Controls whether to use Jinja2 native types. - It is off by default even if global jinja2_native is True. - Has no effect if global jinja2_native is False. - This offers more flexibility than the template module which does not use Jinja2 native types at all. - Mutually exclusive with the convert_data option. default: False version_added: '2.11' type: bool template_vars: description: A dictionary, the keys become additional variables available for templating. default: {} version_added: '2.3' type: dict comment_start_string: description: The string marking the beginning of a comment statement. version_added: '2.12' type: str comment_end_string: description: The string marking the end of a comment statement. 
version_added: '2.12' type: str """ EXAMPLES = """ - name: show templating results ansible.builtin.debug: msg: "{{ lookup('ansible.builtin.template', './some_template.j2') }}" - name: show templating results with different variable start and end string ansible.builtin.debug: msg: "{{ lookup('ansible.builtin.template', './some_template.j2', variable_start_string='[%', variable_end_string='%]') }}" - name: show templating results with different comment start and end string ansible.builtin.debug: msg: "{{ lookup('ansible.builtin.template', './some_template.j2', comment_start_string='[#', comment_end_string='#]') }}" """ RETURN = """ _raw: description: file(s) content after templating type: list elements: raw """ from copy import deepcopy import os import ansible.constants as C from ansible.errors import AnsibleError from ansible.plugins.lookup import LookupBase from ansible.module_utils._text import to_bytes, to_text from ansible.template import generate_ansible_template_vars, AnsibleEnvironment from ansible.utils.display import Display from ansible.utils.native_jinja import NativeJinjaText display = Display() class LookupModule(LookupBase): def run(self, terms, variables, **kwargs): ret = [] self.set_options(var_options=variables, direct=kwargs) # capture options convert_data_p = self.get_option('convert_data') lookup_template_vars = self.get_option('template_vars') jinja2_native = self.get_option('jinja2_native') and C.DEFAULT_JINJA2_NATIVE variable_start_string = self.get_option('variable_start_string') variable_end_string = self.get_option('variable_end_string') comment_start_string = self.get_option('comment_start_string') comment_end_string = self.get_option('comment_end_string') if jinja2_native: templar = self._templar else: templar = self._templar.copy_with_new_env(environment_class=AnsibleEnvironment) for term in terms: display.debug("File lookup term: %s" % term) lookupfile = self.find_file_in_search_path(variables, 'templates', term) display.vvvv("File lookup using %s as file" % lookupfile) if lookupfile: b_template_data, show_data = self._loader._get_file_contents(lookupfile) template_data = to_text(b_template_data, errors='surrogate_or_strict') # set jinja2 internal search path for includes searchpath = variables.get('ansible_search_path', []) if searchpath: # our search paths aren't actually the proper ones for jinja includes. # We want to search into the 'templates' subdir of each search path in # addition to our original search paths. newsearchpath = [] for p in searchpath: newsearchpath.append(os.path.join(p, 'templates')) newsearchpath.append(p) searchpath = newsearchpath searchpath.insert(0, os.path.dirname(lookupfile)) # The template will have access to all existing variables, # plus some added by ansible (e.g., template_{path,mtime}), # plus anything passed to the lookup with the template_vars= # argument. 
vars = deepcopy(variables) vars.update(generate_ansible_template_vars(term, lookupfile)) vars.update(lookup_template_vars) with templar.set_temporary_context(variable_start_string=variable_start_string, variable_end_string=variable_end_string, comment_start_string=comment_start_string, comment_end_string=comment_end_string, available_variables=vars, searchpath=searchpath): res = templar.template(template_data, preserve_trailing_newlines=True, convert_data=convert_data_p, escape_backslashes=False) if C.DEFAULT_JINJA2_NATIVE and not jinja2_native: # jinja2_native is true globally but off for the lookup, we need this text # not to be processed by literal_eval anywhere in Ansible res = NativeJinjaText(res) ret.append(res) else: raise AnsibleError("the template file %s could not be found for the lookup" % term) return ret
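Note that the DOCUMENTATION block above declares no `default:` for `convert_data`, while the linked changelog fragment is named `77004-restore-missing-default.yml`, which suggests the fix restores that default. A hedged sketch of the suspected failure mode (function and variable names are illustrative, not the plugin's actual code):

```python
# Suspected failure mode (illustrative): an option declared without a default
# comes back as None, and a truthiness check then skips data conversion.
def render(data, convert_data=True):
    if convert_data:
        return {"hello": "world"}  # stand-in for "template output parsed to data"
    return data                    # left untouched as a plain string

missing_default = None  # what get_option() would yield with no default declared
print(render('{ "hello": "world" }', convert_data=missing_default))  # stays a string
print(render('{ "hello": "world" }', convert_data=True))             # parsed mapping
```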
closed
ansible/ansible
https://github.com/ansible/ansible
77004
to_json filter sometimes turns values into strings when templating
https://github.com/ansible/ansible/issues/77004
https://github.com/ansible/ansible/pull/77016
c9d3518d2f3812787e1627806b5fa93f8fae48a6
3779c1f278685c5a8d7f78942ce649f6805a5775
2022-02-10T17:48:08Z
python
2022-02-14T14:21:17Z
test/integration/targets/lookup_template/tasks/main.yml
# ref #18526
- name: Test that we have a proper jinja search path in template lookup
  set_fact:
    hello_world: "{{ lookup('template', 'hello.txt') }}"

- assert:
    that:
      - "hello_world|trim == 'Hello world!'"

- name: Test that we have a proper jinja search path in template lookup with different variable start and end string
  vars:
    my_var: world
  set_fact:
    hello_world_string: "{{ lookup('template', 'hello_string.txt', variable_start_string='[%', variable_end_string='%]') }}"

- assert:
    that:
      - "hello_world_string|trim == 'Hello world!'"

- name: Test that we have a proper jinja search path in template lookup with different comment start and end string
  set_fact:
    hello_world_comment: "{{ lookup('template', 'hello_comment.txt', comment_start_string='[#', comment_end_string='#]') }}"

- assert:
    that:
      - "hello_world_comment|trim == 'Hello world!'"
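The tasks above render three template fixtures that are not part of this record. A hedged sketch of fixtures that would satisfy the assertions (hypothetical contents; the real test files may differ) could be created with `copy`:

```yaml
# Hypothetical fixture contents; each must render to 'Hello world!'.
- copy:
    dest: templates/hello.txt
    content: 'Hello world!'   # the real fixture likely templates a variable

- copy:
    dest: templates/hello_string.txt
    content: 'Hello [% my_var %]!'   # uses the overridden [% %] variable markers

- copy:
    dest: templates/hello_comment.txt
    content: 'Hello world![# this comment disappears from the output #]'
```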
closed
ansible/ansible
https://github.com/ansible/ansible
77004
to_json filter sometimes turns values into strings when templating
https://github.com/ansible/ansible/issues/77004
https://github.com/ansible/ansible/pull/77016
c9d3518d2f3812787e1627806b5fa93f8fae48a6
3779c1f278685c5a8d7f78942ce649f6805a5775
2022-02-10T17:48:08Z
python
2022-02-14T14:21:17Z
test/integration/targets/lookup_template/templates/dict.j2
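The record carries no content for `templates/dict.j2`, presumably because the template was introduced by the linked pull request itself. A hypothetical one-line template in that spirit (the variable name `a_dict` is an assumption, not the merged fixture) would simply render a dict-valued variable so the lookup's return type can be asserted:

```yaml
# Hypothetical usage; dict.j2 would contain just: {{ a_dict }}
- set_fact:
    looked_up: "{{ lookup('template', 'dict.j2') }}"
  vars:
    a_dict:
      hello: world
```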
closed
ansible/ansible
https://github.com/ansible/ansible
77010
dnf module fails on Fedora Rawhide
### Summary

An error occurs when using this Ansible code:

```yaml
- name: install docker
  ansible.builtin.package:
    name: moby-engine
    state: present
```

The [error](https://gitlab.com/robertdebock/ansible-role-docker/-/jobs/2078222255#L279):

```text
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: in method 'ConfigParser_setSubstitutions', argument 2 of type 'std::map< std::string,std::string,std::less< std::string >,std::allocator< std::pair< std::string const,std::string > > > const &'
```

The full error:

```text
fatal: [docker-fedora-rawhide]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1644500487.7706077-1096-217210945615717/AnsiballZ_dnf.py\", line 107, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1644500487.7706077-1096-217210945615717/AnsiballZ_dnf.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1644500487.7706077-1096-217210945615717/AnsiballZ_dnf.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible.modules.dnf', init_globals=dict(_module_fqn='ansible.modules.dnf', _modlib_path=modlib_path),\n File \"/usr/lib64/python3.10/runpy.py\", line 209, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.legacy.dnf_payload_vdrv7w8l/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py\", line 1427, in <module>\n File \"/tmp/ansible_ansible.legacy.dnf_payload_vdrv7w8l/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py\", line 1416, in main\n File \"/tmp/ansible_ansible.legacy.dnf_payload_vdrv7w8l/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py\", line 1382, in run\n File \"/tmp/ansible_ansible.legacy.dnf_payload_vdrv7w8l/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py\", line 704, in _base\n File \"/tmp/ansible_ansible.legacy.dnf_payload_vdrv7w8l/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py\", line 674, in _specify_repositories\n File \"/usr/lib/python3.10/site-packages/dnf/base.py\", line 545, in read_all_repos\n for repo in reader:\n File \"/usr/lib/python3.10/site-packages/dnf/conf/read.py\", line 42, in __iter__\n for r in self._get_repos(self.conf.config_file_path):\n File \"/usr/lib/python3.10/site-packages/dnf/conf/read.py\", line 109, in _get_repos\n parser.setSubstitutions(substs)\n File \"/usr/lib64/python3.10/site-packages/libdnf/conf.py\", line 1670, in setSubstitutions\n return _conf.ConfigParser_setSubstitutions(self, substitutions)\nTypeError: in method 'ConfigParser_setSubstitutions', argument 2 of type 'std::map< std::string,std::string,std::less< std::string >,std::allocator< std::pair< std::string const,std::string > > > const &'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```

It happens on Fedora Rawhide only, not Fedora 35.

### Issue Type

Bug Report

### Component Name

package

### Ansible Version

```console
$ ansible --version
ansible [core 2.12.2]
  config file = /Users/robertdb/Documents/git.adfinis.com/juniper/ansible-playbook-gitlab/ansible.cfg
  configured module search path = ['/Users/robertdb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/homebrew/lib/python3.9/site-packages/ansible
  ansible collection location = /Users/robertdb/Documents/git.adfinis.com/juniper/ansible-playbook-gitlab/collections
  executable location = /opt/homebrew/bin/ansible
  python version = 3.9.2 (default, Mar 26 2021, 15:28:17) [Clang 12.0.0 (clang-1200.0.32.29)]
  jinja version = 3.0.3
  libyaml = True
```

### Configuration

```console
$ ansible-config dump --only-changed
# (no output)
```

### OS / Environment

Ansible Controller: Mac OS X
Ansible managed node: Fedora Rawhide (currently "36")

### Steps to Reproduce

```yaml
- name: install docker
  ansible.builtin.package:
    name: moby-engine
    state: present
```

### Expected Results

I expected the package to be installed.

### Actual Results

```console
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: in method 'ConfigParser_setSubstitutions', argument 2 of type 'std::map< std::string,std::string,std::less< std::string >,std::allocator< std::pair< std::string const,std::string > > > const &'
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
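Until a fixed module reaches Rawhide nodes, one hedged stopgap (not from the report, and giving up the package module's check-mode support) is to shell out to the dnf CLI directly:

```yaml
- name: Install docker via the dnf CLI (temporary fallback)
  ansible.builtin.command:
    cmd: dnf install -y moby-engine
  register: dnf_cli
  # dnf prints "Nothing to do." when the package is already installed,
  # which gives us a rough idempotence signal.
  changed_when: "'Nothing to do' not in dnf_cli.stdout"
```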
https://github.com/ansible/ansible/issues/77010
https://github.com/ansible/ansible/pull/77024
af7c9deb4ed4c6b9a526a40f976bb52c570fced1
18251f368511d1eaa161380517c29f6d7839d229
2022-02-11T11:59:02Z
python
2022-02-15T15:12:47Z
changelogs/fragments/77010-dnf-ensure-releasever-string.yml
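No file content follows because the changelog fragment is new in the fix. Inferred from the filename alone (the merged wording may differ), a fragment of this kind typically looks like:

```yaml
bugfixes:
  - dnf - ensure the releasever substitution is passed to libdnf as a string,
    avoiding a TypeError from ConfigParser_setSubstitutions on Fedora Rawhide
    (https://github.com/ansible/ansible/issues/77010).
```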
closed
ansible/ansible
https://github.com/ansible/ansible
77010
dnf module fails on Fedora Rawhide
https://github.com/ansible/ansible/issues/77010
https://github.com/ansible/ansible/pull/77024
af7c9deb4ed4c6b9a526a40f976bb52c570fced1
18251f368511d1eaa161380517c29f6d7839d229
2022-02-11T11:59:02Z
python
2022-02-15T15:12:47Z
lib/ansible/modules/dnf.py
# -*- coding: utf-8 -*- # Copyright 2015 Cristian van Ee <cristian at cvee.org> # Copyright 2015 Igor Gnatenko <[email protected]> # Copyright 2018 Adam Miller <[email protected]> # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: dnf version_added: 1.9 short_description: Manages packages with the I(dnf) package manager description: - Installs, upgrade, removes, and lists packages and groups with the I(dnf) package manager. options: name: description: - "A package name or package specifier with version, like C(name-1.0). When using state=latest, this can be '*' which means run: dnf -y update. You can also pass a url or a local path to a rpm file. To operate on several packages this can accept a comma separated string of packages or a list of packages." - Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0) - You can also pass an absolute path for a binary which is provided by the package to install. See examples for more information. required: true aliases: - pkg type: list elements: str list: description: - Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples. type: str state: description: - Whether to install (C(present), C(latest)), or remove (C(absent)) a package. - Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is enabled for this module, then C(absent) is inferred. choices: ['absent', 'present', 'installed', 'removed', 'latest'] type: str enablerepo: description: - I(Repoid) of repositories to enable for the install/update operation. These repos will not persist beyond the transaction. When specifying multiple repos, separate them with a ",". type: list elements: str disablerepo: description: - I(Repoid) of repositories to disable for the install/update operation. These repos will not persist beyond the transaction. When specifying multiple repos, separate them with a ",". type: list elements: str conf_file: description: - The remote dnf configuration file to use for the transaction. type: str disable_gpg_check: description: - Whether to disable the GPG checking of signatures of packages being installed. Has an effect only if state is I(present) or I(latest). - This setting affects packages installed from a repository as well as "local" packages installed from the filesystem or a URL. type: bool default: 'no' installroot: description: - Specifies an alternative installroot, relative to which all packages will be installed. version_added: "2.3" default: "/" type: str releasever: description: - Specifies an alternative release from which all packages will be installed. version_added: "2.6" type: str autoremove: description: - If C(yes), removes all "leaf" packages from the system that were originally installed as dependencies of user-installed packages but which are no longer required by any such package. Should be used alone or when state is I(absent) type: bool default: "no" version_added: "2.4" exclude: description: - Package name(s) to exclude when state=present, or latest. This can be a list or a comma separated string. version_added: "2.7" type: list elements: str skip_broken: description: - Skip all unavailable packages or packages with broken dependencies without raising an error. Equivalent to passing the --skip-broken option. 
type: bool default: "no" version_added: "2.7" update_cache: description: - Force dnf to check if cache is out of date and redownload if needed. Has an effect only if state is I(present) or I(latest). type: bool default: "no" aliases: [ expire-cache ] version_added: "2.7" update_only: description: - When using latest, only update installed packages. Do not install packages. - Has an effect only if state is I(latest) default: "no" type: bool version_added: "2.7" security: description: - If set to C(yes), and C(state=latest) then only installs updates that have been marked security related. - Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well. type: bool default: "no" version_added: "2.7" bugfix: description: - If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related. - Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well. default: "no" type: bool version_added: "2.7" enable_plugin: description: - I(Plugin) name to enable for the install/update operation. The enabled plugin will not persist beyond the transaction. version_added: "2.7" type: list elements: str disable_plugin: description: - I(Plugin) name to disable for the install/update operation. The disabled plugins will not persist beyond the transaction. version_added: "2.7" type: list elements: str disable_excludes: description: - Disable the excludes defined in DNF config files. - If set to C(all), disables all excludes. - If set to C(main), disable excludes defined in [main] in dnf.conf. - If set to C(repoid), disable excludes defined for given repo id. version_added: "2.7" type: str validate_certs: description: - This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated. - This should only set to C(no) used on personally controlled sites using self-signed certificates as it avoids verifying the source site. type: bool default: "yes" version_added: "2.7" sslverify: description: - Disables SSL validation of the repository server for this transaction. - This should be set to C(no) if one of the configured repositories is using an untrusted or self-signed certificate. type: bool default: "yes" version_added: "2.13" allow_downgrade: description: - Specify if the named package and version is allowed to downgrade a maybe already installed higher version of that package. Note that setting allow_downgrade=True can make this module behave in a non-idempotent way. The task could end up with a set of packages that does not match the complete list of specified packages to install (because dependencies between the downgraded package and others can cause changes to the packages which were in the earlier transaction). type: bool default: "no" version_added: "2.7" install_repoquery: description: - This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature parity/compatibility with the I(yum) module. type: bool default: "yes" version_added: "2.7" download_only: description: - Only download the packages, do not install them. default: "no" type: bool version_added: "2.7" lock_timeout: description: - Amount of time to wait for the dnf lockfile to be freed. required: false default: 30 type: int version_added: "2.8" install_weak_deps: description: - Will also install all packages linked by a weak dependency relation. 
type: bool default: "yes" version_added: "2.8" download_dir: description: - Specifies an alternate directory to store packages. - Has an effect only if I(download_only) is specified. type: str version_added: "2.8" allowerasing: description: - If C(yes) it allows erasing of installed packages to resolve dependencies. required: false type: bool default: "no" version_added: "2.10" nobest: description: - Set best option to False, so that transactions are not limited to best candidates only. required: false type: bool default: "no" version_added: "2.11" cacheonly: description: - Tells dnf to run entirely from system cache; does not download or update metadata. type: bool default: "no" version_added: "2.12" extends_documentation_fragment: - action_common_attributes - action_common_attributes.flow attributes: action: details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package). support: partial async: support: none bypass_host_loop: support: none check_mode: support: full diff_mode: support: full platform: platforms: rhel notes: - When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option. - Group removal doesn't work if the group was installed with Ansible because upstream dnf's API doesn't properly mark groups as installed, therefore upon removal the module is unable to detect that the group is installed (https://bugzilla.redhat.com/show_bug.cgi?id=1620324) requirements: - "python >= 2.6" - python-dnf - for the autoremove option you need dnf >= 2.0.1" author: - Igor Gnatenko (@ignatenkobrain) <[email protected]> - Cristian van Ee (@DJMuggs) <cristian at cvee.org> - Berend De Schouwer (@berenddeschouwer) - Adam Miller (@maxamillion) <[email protected]> ''' EXAMPLES = ''' - name: Install the latest version of Apache ansible.builtin.dnf: name: httpd state: latest - name: Install Apache >= 2.4 ansible.builtin.dnf: name: httpd>=2.4 state: present - name: Install the latest version of Apache and MariaDB ansible.builtin.dnf: name: - httpd - mariadb-server state: latest - name: Remove the Apache package ansible.builtin.dnf: name: httpd state: absent - name: Install the latest version of Apache from the testing repo ansible.builtin.dnf: name: httpd enablerepo: testing state: present - name: Upgrade all packages ansible.builtin.dnf: name: "*" state: latest - name: Install the nginx rpm from a remote repo ansible.builtin.dnf: name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm' state: present - name: Install nginx rpm from a local file ansible.builtin.dnf: name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state: present - name: Install Package based upon the file it provides ansible.builtin.dnf: name: /usr/bin/cowsay state: present - name: Install the 'Development tools' package group ansible.builtin.dnf: name: '@Development tools' state: present - name: Autoremove unneeded packages installed as dependencies ansible.builtin.dnf: autoremove: yes - name: Uninstall httpd but keep its dependencies ansible.builtin.dnf: name: httpd state: absent autoremove: no - name: Install a modularity appstream with defined stream and profile ansible.builtin.dnf: name: '@postgresql:9.6/client' state: present - name: Install a modularity appstream with defined stream ansible.builtin.dnf: name: '@postgresql:9.6' state: present - name: Install a modularity appstream with defined profile ansible.builtin.dnf: 
name: '@postgresql/client' state: present ''' import os import re import sys from ansible.module_utils._text import to_native, to_text from ansible.module_utils.urls import fetch_file from ansible.module_utils.six import PY2, text_type from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.locale import get_best_parsable_locale from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec # NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(), # because we need AnsibleModule object to use get_best_parsable_locale() # to set proper locale before importing dnf to be able to scrape # the output in some cases (FIXME?). dnf = None class DnfModule(YumDnf): """ DNF Ansible module back-end implementation """ def __init__(self, module): # This populates instance vars for all argument spec params super(DnfModule, self).__init__(module) self._ensure_dnf() self.lockfile = "/var/cache/dnf/*_lock.pid" self.pkg_mgr_name = "dnf" try: self.with_modules = dnf.base.WITH_MODULES except AttributeError: self.with_modules = False # DNF specific args that are not part of YumDnf self.allowerasing = self.module.params['allowerasing'] self.nobest = self.module.params['nobest'] def is_lockfile_pid_valid(self): # FIXME? it looks like DNF takes care of invalid lock files itself? # https://github.com/ansible/ansible/issues/57189 return True def _sanitize_dnf_error_msg_install(self, spec, error): """ For unhandled dnf.exceptions.Error scenarios, there are certain error messages we want to filter in an install scenario. Do that here. """ if ( to_text("no package matched") in to_text(error) or to_text("No match for argument:") in to_text(error) ): return "No package {0} available.".format(spec) return error def _sanitize_dnf_error_msg_remove(self, spec, error): """ For unhandled dnf.exceptions.Error scenarios, there are certain error messages we want to ignore in a removal scenario as known benign failures. Do that here. """ if ( 'no package matched' in to_native(error) or 'No match for argument:' in to_native(error) ): return (False, "{0} is not installed".format(spec)) # Return value is tuple of: # ("Is this actually a failure?", "Error Message") return (True, error) def _package_dict(self, package): """Return a dictionary of information for the package.""" # NOTE: This no longer contains the 'dnfstate' field because it is # already known based on the query type. 
result = { 'name': package.name, 'arch': package.arch, 'epoch': str(package.epoch), 'release': package.release, 'version': package.version, 'repo': package.repoid} # envra format for alignment with the yum module result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result) # keep nevra key for backwards compat as it was previously # defined with a value in envra format result['nevra'] = result['envra'] if package.installtime == 0: result['yumstate'] = 'available' else: result['yumstate'] = 'installed' return result def _split_package_arch(self, packagename): # This list was auto generated on a Fedora 28 system with the following one-liner # printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n' redhat_rpm_arches = [ "aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha", "alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel", "armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon", "geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el", "mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6", "noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64", "ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries", "riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v", "sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64" ] name, delimiter, arch = packagename.rpartition('.') if name and arch and arch in redhat_rpm_arches: return name, arch return packagename, None def _packagename_dict(self, packagename): """ Return a dictionary of information for a package name string or None if the package name doesn't contain at least all NVR elements """ if packagename[-4:] == '.rpm': packagename = packagename[:-4] rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)') try: arch = None nevr, arch = self._split_package_arch(packagename) if arch: packagename = nevr rpm_nevr_match = rpm_nevr_re.match(packagename) if rpm_nevr_match: name, epoch, version, release = rpm_nevr_re.match(packagename).groups() if not version or not version.split('.')[0].isdigit(): return None else: return None except AttributeError as e: self.module.fail_json( msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)), rc=1, results=[] ) if not epoch: epoch = "0" if ':' in name: epoch_name = name.split(":") epoch = epoch_name[0] name = ''.join(epoch_name[1:]) result = { 'name': name, 'epoch': epoch, 'release': release, 'version': version, } return result # Original implementation from yum.rpmUtils.miscutils (GPLv2+) # http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py def _compare_evr(self, e1, v1, r1, e2, v2, r2): # return 1: a is newer than b # 0: a and b are the same version # -1: b is newer than a if e1 is None: e1 = '0' else: e1 = str(e1) v1 = str(v1) r1 = str(r1) if e2 is None: e2 = '0' else: e2 = str(e2) v2 = str(v2) r2 = str(r2) # print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2) rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2)) # print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc) return rc def _ensure_dnf(self): locale = get_best_parsable_locale(self.module) os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = os.environ['LANG'] = locale global dnf try: import dnf import dnf.cli import dnf.const import dnf.exceptions import dnf.subject import dnf.util HAS_DNF = True except ImportError: HAS_DNF = False if HAS_DNF: 
return system_interpreters = ['/usr/libexec/platform-python', '/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python'] if not has_respawned(): # probe well-known system Python locations for accessible bindings, favoring py3 interpreter = probe_interpreters_for_module(system_interpreters, 'dnf') if interpreter: # respawn under the interpreter where the bindings should be found respawn_module(interpreter) # end of the line for this module, the process will exit here once the respawned module completes # done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed) self.module.fail_json( msg="Could not import the dnf python module using {0} ({1}). " "Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the " "correct ansible_python_interpreter. (attempted {2})" .format(sys.executable, sys.version.replace('\n', ''), system_interpreters), results=[] ) def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True): """Configure the dnf Base object.""" conf = base.conf # Change the configuration file path if provided, this must be done before conf.read() is called if conf_file: # Fail if we can't read the configuration file. if not os.access(conf_file, os.R_OK): self.module.fail_json( msg="cannot read configuration file", conf_file=conf_file, results=[], ) else: conf.config_file_path = conf_file # Read the configuration file conf.read() # Turn off debug messages in the output conf.debuglevel = 0 # Set whether to check gpg signatures conf.gpgcheck = not disable_gpg_check conf.localpkg_gpgcheck = not disable_gpg_check # Don't prompt for user confirmations conf.assumeyes = True # Set certificate validation conf.sslverify = sslverify # Set installroot conf.installroot = installroot # Load substitutions from the filesystem conf.substitutions.update_from_etc(installroot) # Handle different DNF versions immutable mutable datatypes and # dnf v1/v2/v3 # # In DNF < 3.0 are lists, and modifying them works # In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work # In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work # # https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/ # # Set excludes if self.exclude: _excludes = list(conf.exclude) _excludes.extend(self.exclude) conf.exclude = _excludes # Set disable_excludes if self.disable_excludes: _disable_excludes = list(conf.disable_excludes) if self.disable_excludes not in _disable_excludes: _disable_excludes.append(self.disable_excludes) conf.disable_excludes = _disable_excludes # Set releasever if self.releasever is not None: conf.substitutions['releasever'] = self.releasever # Set skip_broken (in dnf this is strict=0) if self.skip_broken: conf.strict = 0 # Set best if self.nobest: conf.best = 0 if self.download_only: conf.downloadonly = True if self.download_dir: conf.destdir = self.download_dir if self.cacheonly: conf.cacheonly = True # Default in dnf upstream is true conf.clean_requirements_on_remove = self.autoremove # Default in dnf (and module default) is True conf.install_weak_deps = self.install_weak_deps def _specify_repositories(self, base, disablerepo, enablerepo): """Enable and disable repositories matching the provided patterns.""" base.read_all_repos() repos = base.repos # Disable repositories for repo_pattern in disablerepo: if repo_pattern: for repo in repos.get_matching(repo_pattern): repo.disable() # Enable repositories for repo_pattern in 
enablerepo: if repo_pattern: for repo in repos.get_matching(repo_pattern): repo.enable() def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify): """Return a fully configured dnf Base object.""" base = dnf.Base() self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify) try: # this method has been supported in dnf-4.2.17-6 or later # https://bugzilla.redhat.com/show_bug.cgi?id=1788212 base.setup_loggers() except AttributeError: pass try: base.init_plugins(set(self.disable_plugin), set(self.enable_plugin)) base.pre_configure_plugins() except AttributeError: pass # older versions of dnf didn't require this and don't have these methods self._specify_repositories(base, disablerepo, enablerepo) try: base.configure_plugins() except AttributeError: pass # older versions of dnf didn't require this and don't have these methods try: if self.update_cache: try: base.update_cache() except dnf.exceptions.RepoError as e: self.module.fail_json( msg="{0}".format(to_text(e)), results=[], rc=1 ) base.fill_sack(load_system_repo='auto') except dnf.exceptions.RepoError as e: self.module.fail_json( msg="{0}".format(to_text(e)), results=[], rc=1 ) add_security_filters = getattr(base, "add_security_filters", None) if callable(add_security_filters): filters = {} if self.bugfix: filters.setdefault('types', []).append('bugfix') if self.security: filters.setdefault('types', []).append('security') if filters: add_security_filters('eq', **filters) else: filters = [] if self.bugfix: key = {'advisory_type__eq': 'bugfix'} filters.append(base.sack.query().upgrades().filter(**key)) if self.security: key = {'advisory_type__eq': 'security'} filters.append(base.sack.query().upgrades().filter(**key)) if filters: base._update_security_filters = filters return base def list_items(self, command): """List package info based on the command.""" # Rename updates to upgrades if command == 'updates': command = 'upgrades' # Return the corresponding packages if command in ['installed', 'upgrades', 'available']: results = [ self._package_dict(package) for package in getattr(self.base.sack.query(), command)()] # Return the enabled repository ids elif command in ['repos', 'repositories']: results = [ {'repoid': repo.id, 'state': 'enabled'} for repo in self.base.repos.iter_enabled()] # Return any matching packages else: packages = dnf.subject.Subject(command).get_best_query(self.base.sack) results = [self._package_dict(package) for package in packages] self.module.exit_json(msg="", results=results) def _is_installed(self, pkg): installed = self.base.sack.query().installed() package_spec = {} name, arch = self._split_package_arch(pkg) if arch: package_spec['arch'] = arch package_details = self._packagename_dict(pkg) if package_details: package_details['epoch'] = int(package_details['epoch']) package_spec.update(package_details) else: package_spec['name'] = name if installed.filter(**package_spec): return True else: return False def _is_newer_version_installed(self, pkg_name): candidate_pkg = self._packagename_dict(pkg_name) if not candidate_pkg: # The user didn't provide a versioned rpm, so version checking is # not required return False installed = self.base.sack.query().installed() installed_pkg = installed.filter(name=candidate_pkg['name']).run() if installed_pkg: installed_pkg = installed_pkg[0] # this looks weird but one is a dict and the other is a dnf.Package evr_cmp = self._compare_evr( installed_pkg.epoch, installed_pkg.version, installed_pkg.release, candidate_pkg['epoch'], 
candidate_pkg['version'], candidate_pkg['release'], ) if evr_cmp == 1: return True else: return False else: return False def _mark_package_install(self, pkg_spec, upgrade=False): """Mark the package for install.""" is_newer_version_installed = self._is_newer_version_installed(pkg_spec) is_installed = self._is_installed(pkg_spec) try: if is_newer_version_installed: if self.allow_downgrade: # dnf only does allow_downgrade, we have to handle this ourselves # because it allows a possibility for non-idempotent transactions # on a system's package set (pending the yum repo has many old # NVRs indexed) if upgrade: if is_installed: self.base.upgrade(pkg_spec) else: self.base.install(pkg_spec) else: self.base.install(pkg_spec) else: # Nothing to do, report back pass elif is_installed: # An potentially older (or same) version is installed if upgrade: self.base.upgrade(pkg_spec) else: # Nothing to do, report back pass else: # The package is not installed, simply install it self.base.install(pkg_spec) return {'failed': False, 'msg': '', 'failure': '', 'rc': 0} except dnf.exceptions.MarkingError as e: return { 'failed': True, 'msg': "No package {0} available.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } except dnf.exceptions.DepsolveError as e: return { 'failed': True, 'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } except dnf.exceptions.Error as e: if to_text("already installed") in to_text(e): return {'failed': False, 'msg': '', 'failure': ''} else: return { 'failed': True, 'msg': "Unknown Error occurred for package {0}.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } def _whatprovides(self, filepath): self.base.read_all_repos() available = self.base.sack.query().available() # Search in file files_filter = available.filter(file=filepath) # And Search in provides pkg_spec = files_filter.union(available.filter(provides=filepath)).run() if pkg_spec: return pkg_spec[0].name def _parse_spec_group_file(self): pkg_specs, grp_specs, module_specs, filenames = [], [], [], [] already_loaded_comps = False # Only load this if necessary, it's slow for name in self.names: if '://' in name: name = fetch_file(self.module, name) filenames.append(name) elif name.endswith(".rpm"): filenames.append(name) elif name.startswith('/'): # like "dnf install /usr/bin/vi" pkg_spec = self._whatprovides(name) if pkg_spec: pkg_specs.append(pkg_spec) continue elif name.startswith("@") or ('/' in name): if not already_loaded_comps: self.base.read_comps() already_loaded_comps = True grp_env_mdl_candidate = name[1:].strip() if self.with_modules: mdl = self.module_base._get_modules(grp_env_mdl_candidate) if mdl[0]: module_specs.append(grp_env_mdl_candidate) else: grp_specs.append(grp_env_mdl_candidate) else: grp_specs.append(grp_env_mdl_candidate) else: pkg_specs.append(name) return pkg_specs, grp_specs, module_specs, filenames def _update_only(self, pkgs): not_installed = [] for pkg in pkgs: if self._is_installed(pkg): try: if isinstance(to_text(pkg), text_type): self.base.upgrade(pkg) else: self.base.package_upgrade(pkg) except Exception as e: self.module.fail_json( msg="Error occurred attempting update_only operation: {0}".format(to_native(e)), results=[], rc=1, ) else: not_installed.append(pkg) return not_installed def _install_remote_rpms(self, filenames): if int(dnf.__version__.split(".")[0]) >= 2: pkgs = 
list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True)) else: pkgs = [] try: for filename in filenames: pkgs.append(self.base.add_remote_rpm(filename)) except IOError as e: if to_text("Can not load RPM file") in to_text(e): self.module.fail_json( msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)), results=[], rc=1, ) if self.update_only: self._update_only(pkgs) else: for pkg in pkgs: try: if self._is_newer_version_installed(self._package_dict(pkg)['nevra']): if self.allow_downgrade: self.base.package_install(pkg) else: self.base.package_install(pkg) except Exception as e: self.module.fail_json( msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)), results=[], rc=1, ) def _is_module_installed(self, module_spec): if self.with_modules: module_spec = module_spec.strip() module_list, nsv = self.module_base._get_modules(module_spec) enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name) if enabled_streams: if nsv.stream: if nsv.stream in enabled_streams: return True # The provided stream was found else: return False # The provided stream was not found else: return True # No stream provided, but module found return False # seems like a sane default def ensure(self): response = { 'msg': "", 'changed': False, 'results': [], 'rc': 0 } # Accumulate failures. Package management modules install what they can # and fail with a message about what they can't. failure_response = { 'msg': "", 'failures': [], 'results': [], 'rc': 1 } # Autoremove is called alone # Jump to remove path where base.autoremove() is run if not self.names and self.autoremove: self.names = [] self.state = 'absent' if self.names == ['*'] and self.state == 'latest': try: self.base.upgrade_all() except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages" self.module.fail_json(**failure_response) else: pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file() pkg_specs = [p.strip() for p in pkg_specs] filenames = [f.strip() for f in filenames] groups = [] environments = [] for group_spec in (g.strip() for g in group_specs): group = self.base.comps.group_by_pattern(group_spec) if group: groups.append(group.id) else: environment = self.base.comps.environment_by_pattern(group_spec) if environment: environments.append(environment.id) else: self.module.fail_json( msg="No group {0} available.".format(group_spec), results=[], ) if self.state in ['installed', 'present']: # Install files. self._install_remote_rpms(filenames) for filename in filenames: response['results'].append("Installed {0}".format(filename)) # Install modules if module_specs and self.with_modules: for module in module_specs: try: if not self._is_module_installed(module): response['results'].append("Module {0} installed.".format(module)) self.module_base.install([module]) self.module_base.enable([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) # Install groups. 
for group in groups: try: group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES) if group_pkg_count_installed == 0: response['results'].append("Group {0} already installed.".format(group)) else: response['results'].append("Group {0} installed.".format(group)) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: # In dnf 2.0 if all the mandatory packages in a group do # not install, an error is raised. We want to capture # this but still install as much as possible. failure_response['failures'].append(" ".join((group, to_native(e)))) for environment in environments: try: self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((environment, to_native(e)))) if module_specs and not self.with_modules: # This means that the group or env wasn't found in comps self.module.fail_json( msg="No group {0} available.".format(module_specs[0]), results=[], ) # Install packages. if self.update_only: not_installed = self._update_only(pkg_specs) for spec in not_installed: response['results'].append("Packages providing %s not installed due to update_only specified" % spec) else: for pkg_spec in pkg_specs: install_result = self._mark_package_install(pkg_spec) if install_result['failed']: if install_result['msg']: failure_response['msg'] += install_result['msg'] failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure'])) else: if install_result['msg']: response['results'].append(install_result['msg']) elif self.state == 'latest': # "latest" is same as "installed" for filenames. self._install_remote_rpms(filenames) for filename in filenames: response['results'].append("Installed {0}".format(filename)) # Upgrade modules if module_specs and self.with_modules: for module in module_specs: try: if self._is_module_installed(module): response['results'].append("Module {0} upgraded.".format(module)) self.module_base.upgrade([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) for group in groups: try: try: self.base.group_upgrade(group) response['results'].append("Group {0} upgraded.".format(group)) except dnf.exceptions.CompsError: if not self.update_only: # If not already installed, try to install. group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES) if group_pkg_count_installed == 0: response['results'].append("Group {0} already installed.".format(group)) else: response['results'].append("Group {0} installed.".format(group)) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((group, to_native(e)))) for environment in environments: try: try: self.base.environment_upgrade(environment) except dnf.exceptions.CompsError: # If not already installed, try to install. 
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((environment, to_native(e)))) if self.update_only: not_installed = self._update_only(pkg_specs) for spec in not_installed: response['results'].append("Packages providing %s not installed due to update_only specified" % spec) else: for pkg_spec in pkg_specs: # best effort causes to install the latest package # even if not previously installed self.base.conf.best = True install_result = self._mark_package_install(pkg_spec, upgrade=True) if install_result['failed']: if install_result['msg']: failure_response['msg'] += install_result['msg'] failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure'])) else: if install_result['msg']: response['results'].append(install_result['msg']) else: # state == absent if filenames: self.module.fail_json( msg="Cannot remove paths -- please specify package name.", results=[], ) # Remove modules if module_specs and self.with_modules: for module in module_specs: try: if self._is_module_installed(module): response['results'].append("Module {0} removed.".format(module)) self.module_base.remove([module]) self.module_base.disable([module]) self.module_base.reset([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) for group in groups: try: self.base.group_remove(group) except dnf.exceptions.CompsError: # Group is already uninstalled. pass except AttributeError: # Group either isn't installed or wasn't marked installed at install time # because of DNF bug # # This is necessary until the upstream dnf API bug is fixed where installing # a group via the dnf API doesn't actually mark the group as installed # https://bugzilla.redhat.com/show_bug.cgi?id=1620324 pass for environment in environments: try: self.base.environment_remove(environment) except dnf.exceptions.CompsError: # Environment is already uninstalled. pass installed = self.base.sack.query().installed() for pkg_spec in pkg_specs: # short-circuit installed check for wildcard matching if '*' in pkg_spec: try: self.base.remove(pkg_spec) except dnf.exceptions.MarkingError as e: is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e)) if is_failure: failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e))) else: response['results'].append(handled_remove_error) continue installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query( sack=self.base.sack).installed().run() for pkg in installed_pkg: self.base.remove(str(pkg)) # Like the dnf CLI we want to allow recursive removal of dependent # packages self.allowerasing = True if self.autoremove: self.base.autoremove() try: if not self.base.resolve(allow_erasing=self.allowerasing): if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.fail_json(**failure_response) response['msg'] = "Nothing to do" self.module.exit_json(**response) else: response['changed'] = True # If packages got installed/removed, add them to the results. # We do this early so we can use it for both check_mode and not. 
if self.download_only: install_action = 'Downloaded' else: install_action = 'Installed' for package in self.base.transaction.install_set: response['results'].append("{0}: {1}".format(install_action, package)) for package in self.base.transaction.remove_set: response['results'].append("Removed: {0}".format(package)) if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.fail_json(**failure_response) if self.module.check_mode: response['msg'] = "Check mode: No changes made, but would have if not in check mode" self.module.exit_json(**response) try: if self.download_only and self.download_dir and self.base.conf.destdir: dnf.util.ensure_dir(self.base.conf.destdir) self.base.repos.all().pkgdir = self.base.conf.destdir self.base.download_packages(self.base.transaction.install_set) except dnf.exceptions.DownloadError as e: self.module.fail_json( msg="Failed to download packages: {0}".format(to_text(e)), results=[], ) # Validate GPG. This is NOT done in dnf.Base (it's done in the # upstream CLI subclass of dnf.Base) if not self.disable_gpg_check: for package in self.base.transaction.install_set: fail = False gpgres, gpgerr = self.base._sig_check_pkg(package) if gpgres == 0: # validated successfully continue elif gpgres == 1: # validation failed, install cert? try: self.base._get_key_for_package(package) except dnf.exceptions.Error as e: fail = True else: # fatal error fail = True if fail: msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr) self.module.fail_json(msg) if self.download_only: # No further work left to do, and the results were already updated above. # Just return them. self.module.exit_json(**response) else: tid = self.base.do_transaction() if tid is not None: transaction = self.base.history.old([tid])[0] if transaction.return_code: failure_response['failures'].append(transaction.output()) if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.fail_json(**failure_response) self.module.exit_json(**response) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e)) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: if to_text("already installed") in to_text(e): response['changed'] = False response['results'].append("Package already installed: {0}".format(to_native(e))) self.module.exit_json(**response) else: failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e)) self.module.fail_json(**failure_response) def run(self): """The main function.""" # Check if autoremove is called correctly if self.autoremove: if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'): self.module.fail_json( msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__, results=[], ) # Check if download_dir is called correctly if self.download_dir: if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'): self.module.fail_json( msg="download_dir requires dnf>=2.6.2. 
Current dnf version is %s" % dnf.__version__, results=[], ) if self.update_cache and not self.names and not self.list: self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot, self.sslverify ) self.module.exit_json( msg="Cache updated", changed=False, results=[], rc=0 ) # Set state as installed by default # This is not set in AnsibleModule() because the following shouldn't happen # - dnf: autoremove=yes state=installed if self.state is None: self.state = 'installed' if self.list: self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot, self.sslverify ) self.list_items(self.list) else: # Note: base takes a long time to run so we want to check for failure # before running it. if not self.download_only and not dnf.util.am_i_root(): self.module.fail_json( msg="This command has to be run under the root user.", results=[], ) self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot, self.sslverify ) if self.with_modules: self.module_base = dnf.module.module_base.ModuleBase(self.base) self.ensure() def main(): # state=installed name=pkgspec # state=removed name=pkgspec # state=latest name=pkgspec # # informational commands: # list=installed # list=updates # list=available # list=repos # list=pkgspec # Extend yumdnf_argument_spec with dnf-specific features that will never be # backported to yum because yum is now in "maintenance mode" upstream yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool') yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool') module = AnsibleModule( **yumdnf_argument_spec ) module_implementation = DnfModule(module) try: module_implementation.run() except dnf.exceptions.RepoError as de: module.fail_json( msg="Failed to synchronize repodata: {0}".format(to_native(de)), rc=1, results=[], changed=False ) if __name__ == '__main__': main()
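For readers tracing the module code above, here is a minimal playbook sketch exercising its main code paths (`_mark_package_install`, the `state == latest` handling, and the `download_only` branch); the package names are illustrative assumptions, not taken from the source:

```yaml
- hosts: all
  become: true
  tasks:
    - name: Install a package (exercises _mark_package_install)
      ansible.builtin.dnf:
        name: httpd          # illustrative package name
        state: present

    - name: Upgrade everything (the state == latest path)
      ansible.builtin.dnf:
        name: "*"
        state: latest

    - name: Download without installing (the download_only branch)
      ansible.builtin.dnf:
        name: kernel         # illustrative package name
        state: latest
        download_only: true
```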
closed
ansible/ansible
https://github.com/ansible/ansible
77,047
please add/elaborate when variables need to be wrapped in jinja delimiters
### Summary I'm studying Red Hat's RH294 and ansible in general. One thing that often confuses me is when to wrap variables in jinja delimiters `{{ }}` and when not. [playbooks_variables.rst](https://docs.ansible.com/ansible/2.9/user_guide/playbooks_variables.html#using-variables-with-jinja2) says: > Once you’ve defined variables, you can use them in your playbooks using the Jinja2 templating system. Here’s a simple Jinja2 template: > > `My amp goes to {{ max_amp_value }}` But many modules/conditionals have implicit jinja wrapping and fail when a variable is wrapped in `{{ }}`. The [builtin `debug` module's `var` parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/debug_module.html#parameter-var) is one of the few places so far where I found this to be documented explicitly. The [`when` clause](https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html) is another example where it is documented that it is a jinja expression and does not require wrapping. However, `failed_when` and `changed_when` also require variables to be used as-is, but that is not documented at [conditionals](https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html) AFAICT. ### Issue Type Documentation Report ### Component Name playbooks_variables.rst ### Ansible Version ```console n/a ``` ### Configuration ```console n/a ``` ### OS / Environment n/a ### Additional Information Adding an explanation that some modules/variables/statements run in a jinja context and others don't, and how this affects the need to wrap variables, would help newcomers create correct playbooks more quickly. Adding a similar note about the jinja context to playbooks_error_handling.rst, as seen in playbooks_conditionals.rst, would be helpful as well. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
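To make the distinction the report asks about concrete, here is a small sketch (the command and variable names are illustrative assumptions): conditionals such as `failed_when` are already Jinja expressions and take the bare variable, while string-valued parameters need the delimiters.

```yaml
- name: Conditionals are implicitly templated; no braces needed
  ansible.builtin.command: /usr/bin/some-check   # illustrative command
  register: result
  failed_when: result.rc not in [0, 2]

- name: String parameters are not; braces are required here
  ansible.builtin.debug:
    msg: "the check returned {{ result.rc }}"
```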
https://github.com/ansible/ansible/issues/77047
https://github.com/ansible/ansible/pull/77051
e620b96f49c6b343963a01f1ff6c5419869df2df
84b85a5b5a53a56c460bf4b68b5126fd2ccdc03a
2022-02-17T09:49:40Z
python
2022-02-17T15:28:12Z
docs/docsite/rst/reference_appendices/faq.rst
.. _ansible_faq: Frequently Asked Questions ========================== Here are some commonly asked questions and their answers. .. _collections_transition: Where did all the modules go? +++++++++++++++++++++++++++++ In July 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10). * To learn more about using collections, see :ref:`collections`. * To learn more about developing collections, see :ref:`developing_collections`. * To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections. .. _find_my_module: Where did this specific module go? ++++++++++++++++++++++++++++++++++ If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`. .. _set_environment: How can I set the PATH or any other environment variable for a task or entire play? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Setting environment variables can be done with the `environment` keyword. It can be used at the task level or other levels in the play. .. code-block:: yaml shell: cmd: date environment: LANG: fr_FR.UTF-8 .. code-block:: yaml hosts: servers environment: PATH: "{{ ansible_env.PATH }}:/thingy/bin" SOME: value .. note:: Starting in 2.0.1, the setup task from ``gather_facts`` also inherits the environment directive from the play; you might need to use the ``|default`` filter to avoid errors if setting this at play level. .. _faq_setting_users_and_ports: How do I handle different machines needing different user accounts or ports to log in with? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Setting inventory variables in the inventory file is the easiest way. For instance, suppose these hosts have different usernames and ports: .. code-block:: ini [webservers] asdf.example.com ansible_port=5000 ansible_user=alice jkl.example.com ansible_port=5001 ansible_user=bob You can also dictate the connection type to be used, if you want: .. 
code-block:: ini [testcluster] localhost ansible_connection=local /path/to/chroot1 ansible_connection=chroot foo.example.com ansible_connection=paramiko You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file. See the rest of the documentation for more information about how to organize variables. .. _use_ssh: How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used by default if OpenSSH is new enough to support ControlPersist as an option. Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko. We keep paramiko as the default as if you are first installing Ansible on these enterprise operating systems, it offers a better experience for new users. .. _use_ssh_jump_hosts: How do I configure a jump host to access servers that I have no direct access to? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ You can set a ``ProxyCommand`` in the ``ansible_ssh_common_args`` inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group: .. code-block:: ini [gatewayed] foo ansible_host=192.0.2.1 bar ansible_host=192.0.2.2 You can create `group_vars/gatewayed.yml` with the following contents:: ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"' Ansible will append these arguments to the command line when trying to connect to any hosts in the group ``gatewayed``. (These arguments are used in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.) Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With older versions, it's necessary to execute ``nc %h:%p`` or some equivalent command on the bastion host. With earlier versions of Ansible, it was necessary to configure a suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``, or globally by setting ``ssh_args`` in ``ansible.cfg``. .. _ssh_serveraliveinterval: How do I get Ansible to notice a dead target in a timely manner? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option, SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval`` into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that ``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session. .. 
_cloud_provider_performance: How do I speed up runs of Ansible for servers from cloud providers (EC2, OpenStack, and so on)? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Don't try to manage a fleet of machines of a cloud provider from your laptop. Rather, connect to a management node inside this cloud provider first and run Ansible from there. .. _python_interpreters: How do I handle not having a Python interpreter at /usr/bin/python on a remote machine? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ While you can write Ansible modules in any language, most Ansible modules are written in Python, including the ones central to letting Ansible work. By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is either Python 2 (version 2.6 or higher) or Python 3 (version 3.5 or higher). Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you want on the system if :command:`/usr/bin/python` on your system does not point to a compatible Python interpreter. Some platforms may only have Python 3 installed by default. If it is not installed as :command:`/usr/bin/python`, you will need to configure the path to the interpreter via ``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some special purpose ones which do not, or you may encounter a bug in an edge case. As a temporary workaround you can install Python 2 on the managed host and configure Ansible to use that Python via ``ansible_python_interpreter``. If there's no mention in the module's documentation that the module requires Python 2, you can also report a bug on our `bug tracker <https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release. Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time. Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on, so you can use this for custom modules written in any scripting language and control the interpreter location. Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``), this facility will be ignored so you will be at the mercy of the remote `$PATH`. .. _installation_faqs: How do I handle the package dependencies required by Ansible during installation? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`. These errors are generally caused by missing packages that are dependencies of the packages required by Ansible. For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi). In order to solve these kinds of dependency issues, you might need to install the required packages using the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide. Refer to the documentation of the respective package for such dependencies and their installation methods. Common Platform Issues ++++++++++++++++++++++ What customer platforms does Red Hat support? 
--------------------------------------------- A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_. Running in a virtualenv ----------------------- You can install Ansible into a virtualenv on the controller quite simply: .. code-block:: shell $ virtualenv ansible $ source ./ansible/bin/activate $ pip install ansible If you want to run under Python 3 instead of Python 2, you may want to change that slightly: .. code-block:: shell $ virtualenv -p python3 ansible $ source ./ansible/bin/activate $ pip install ansible If you need to use any libraries which are not available via pip (for instance, SELinux Python bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you need to install them into the virtualenv. There are two methods: * When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries installed in the system's Python: .. code-block:: shell $ virtualenv ansible --system-site-packages * Copy those files in manually from the system. For instance, for SELinux bindings you might do: .. code-block:: shell $ virtualenv ansible --system-site-packages $ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./ansible/lib64/python3.*/site-packages/ $ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./ansible/lib64/python3.*/site-packages/ Running on BSD -------------- .. seealso:: :ref:`working_with_bsd` Running on Solaris ------------------ By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this is likely the problem. There are several workarounds: * You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using (see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`, and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set:: remote_tmp=$HOME/.ansible/tmp In Ansible 2.5 and later, you can also set it per-host in inventory like this:: solaris1 ansible_remote_tmp=$HOME/.ansible/tmp * You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set this in inventory like so:: solaris1 ansible_shell_executable=/usr/xpg4/bin/sh (bash, ksh, and zsh should also be POSIX compatible if you have any of those installed). Running on z/OS --------------- There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target. * Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC. To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work. * When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode via sftp; however, execution of python fails with .. 
error:: SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details To fix it, set ``pipelining = True`` in `/etc/ansible/ansible.cfg`. * The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host. .. error:: /usr/bin/python: EDC5129I No such file or directory To fix this, set the path to the python installation in your inventory like so:: zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python * Python fails to start with ``The module libpython2.7.so was not found.`` .. error:: EE3501S The module libpython2.7.so was not found. On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``:: zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash Running under fakeroot ---------------------- Some issues arise because ``fakeroot`` does not create a full, POSIX-compliant system by default. It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures, this is likely the problem. The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see the documentation of the shell plugin you are using for specifics). For example, in the ansible config file (or via environment variable) you can set:: remote_tmp=$HOME/.ansible/tmp .. _use_roles: What is the best way to make content reusable/redistributable? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content self-contained, and works well with things like git submodules for sharing content with others. If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended. .. _configuration_file: Where does the configuration file live and what can I configure in it? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ See :ref:`intro_configuration`. .. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how: How do I disable cowsay? ++++++++++++++++++++++++ If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1`` in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable: .. code-block:: shell-session export ANSIBLE_NOCOWS=1 .. _browse_facts: How do I see a list of all of the ansible\_ variables? ++++++++++++++++++++++++++++++++++++++++++++++++++++++ Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module as an ad hoc action: .. code-block:: shell-session ansible -m setup hostname This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question if you need more than just 'facts'. .. _browse_inventory_vars: How do I see all the inventory variables defined for my host? 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ By running the following command, you can see inventory variables for a host: .. code-block:: shell-session ansible-inventory --list --yaml .. _browse_host_vars: How do I see all the variables specific to my host? +++++++++++++++++++++++++++++++++++++++++++++++++++ To see all host-specific variables, which might include facts and other sources: .. code-block:: shell-session ansible -m debug -a "var=hostvars['hostname']" localhost Unless you are using a fact cache, you normally need to use a play that gathers facts first so that the facts referenced in the task above are available. .. _host_loops: How do I loop over a list of hosts in a group, inside of a template? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the ``groups`` dictionary in your template, like this: .. code-block:: jinja {% for host in groups['db_servers'] %} {{ host }} {% endfor %} If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:: - hosts: db_servers tasks: - debug: msg="doesn't matter what you do, just that they were talked to previously." Then you can use the facts inside your template, like this: .. code-block:: jinja {% for host in groups['db_servers'] %} {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }} {% endfor %} .. _programatic_access_to_a_variable: How do I access a variable name programmatically? +++++++++++++++++++++++++++++++++++++++++++++++++ An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together using "~", like so: .. code-block:: jinja {{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }} The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname`` is a magic variable that indicates the current host you are looping over in the host loop. In the example above, if your interface names have dashes, you must replace them with underscores: .. code-block:: jinja {{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }} Also see dynamic_variables_. .. _access_group_variable: How do I access a group variable? +++++++++++++++++++++++++++++++++ Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk assign variables; they are not a first-class entity, as Ansible only cares about hosts and tasks. That said, you could just access the variable by selecting a host that is part of that group; see first_host_in_a_group_ below for an example. .. _first_host_in_a_group: How do I access a variable of the first host in a group? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What happens if we want the IP address of the first webserver in the webservers group? Well, we can do that too. Note that if we are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory is static and predictable. 
(If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud-based inventory scripts). Anyway, here's the trick: .. code-block:: jinja {{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }} Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you could use the Jinja2 ``set`` statement to simplify this, or in a playbook, you could also use set_fact:: - set_fact: headnode={{ groups['webservers'][0] }} - debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }} Notice how we interchanged the bracket syntax for dots -- that can be done anywhere. .. _file_recursion: How do I copy files recursively onto a target host? +++++++++++++++++++++++++++++++++++++++++++++++++++ The ``copy`` module has a recursive parameter. However, take a look at the ``synchronize`` module if you want to do something more efficient for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules. .. _shell_env: How do I access shell environment variables? ++++++++++++++++++++++++++++++++++++++++++++ **On the controller machine:** To access existing environment variables on the controller, use the ``env`` lookup plugin. For example, to access the value of the HOME environment variable on the management machine:: --- # ... vars: local_home: "{{ lookup('env','HOME') }}" **On target machines:** Environment variables are available via facts in the ``ansible_env`` variable: .. code-block:: jinja {{ ansible_env.HOME }} If you need to set environment variables for TASK execution, see :ref:`playbooks_environment` in the :ref:`Advanced Playbooks <playbooks_special_topics>` section. There are several ways to set environment variables on your target machines. You can use the :ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>` modules to introduce environment variables into files. The exact files to edit vary depending on your OS, distribution, and local configuration. .. _user_passwords: How do I generate encrypted passwords for the user module? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ An Ansible ad hoc command is the easiest option: .. code-block:: shell-session ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}" The ``mkpasswd`` utility that is available on most Linux systems is also a great option: .. code-block:: shell-session mkpasswd --method=sha-512 If this utility is not installed on your system (for example, you are using macOS), then you can still easily generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_ password hashing library is installed: .. code-block:: shell-session pip install passlib Once the library is ready, SHA512 password values can then be generated as follows: .. code-block:: shell-session python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))" Use the integrated :ref:`hash_filters` to generate a hashed version of a password. You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data. In OpenBSD, a similar option is available in the base system, called ``encrypt(1)``. .. 
_dot_or_array_notation: Ansible allows dot notation and array notation for variables. Which notation should I use? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The dot notation comes from Jinja and works fine for variables without special characters. If your variable contains dots (.), colons (:), or dashes (-), if a key begins and ends with two underscores, or if a key uses any of the known public attributes, it is safer to use the array notation. See :ref:`playbooks_variables` for a list of the known public attributes. .. code-block:: jinja item[0]['checksum:md5'] item['section']['2.1'] item['region']['Mid-Atlantic'] It is {{ temperature['Celsius']['-3'] }} outside. Also array notation allows for dynamic variable composition, see dynamic_variables_. Another problem with 'dot notation' is that some keys can cause problems because they collide with attributes and methods of python dictionaries. .. code-block:: jinja item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries item['update'] # this works .. _argsplat_unsafe: When is it unsafe to bulk-set task arguments from a variable? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ You can set all of a task's arguments from a dictionary-typed variable. This technique can be useful in some dynamic execution scenarios. However, it introduces a security risk. We do not recommend it, so Ansible issues a warning when you do something like this:: #... vars: usermod_args: name: testuser state: present update_password: always tasks: - user: '{{ usermod_args }}' This particular example is safe. However, constructing tasks like this is risky because the parameters and values passed to ``usermod_args`` could be overwritten by malicious values in the ``host facts`` on a compromised target machine. To mitigate this risk: * set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take precedence over facts) * disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding with variables (this will also disable the original warning) .. _commercial_support: Can I get training on Ansible? ++++++++++++++++++++++++++++++ Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details. We also offer free web-based training classes on a regular basis. See our `webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars. .. _web_interface: Is there a web interface / REST API / GUI? ++++++++++++++++++++++++++++++++++++++++++++ Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`. .. _keep_secret_data: How do I keep secret data in my playbook? +++++++++++++++++++++++++++++++++++++++++ If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`. 
If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:: - name: secret task shell: /usr/bin/do_something --value={{ secret_value }} no_log: True This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output. The ``no_log`` attribute can also apply to an entire play:: - hosts: all no_log: True This will make the play somewhat difficult to debug, though, so it's recommended that you apply it to single tasks only, once a playbook is completed. Note that the use of the ``no_log`` attribute does not prevent data from being shown when debugging Ansible itself via the :envvar:`ANSIBLE_DEBUG` environment variable. .. _when_to_use_brackets: .. _dynamic_variables: .. _interpolate_variables: When should I use {{ }}? Also, how to interpolate variables or dynamic variable names +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A steadfast rule is 'always use ``{{ }}`` except when ``when:``'. Conditionals are always run through Jinja2 to resolve the expression, so ``when:``, ``failed_when:``, and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``. In most other cases you should always use the brackets, even where you previously could use variables without them (as in ``loop`` or ``with_`` clauses), as bare names made it hard to distinguish between an undefined variable and a string. Another rule is 'moustaches don't stack'. We often see this: .. code-block:: jinja {{ somevar_{{other_var}} }} The above DOES NOT WORK as you expect; if you need to use a dynamic variable, use the following as appropriate: .. code-block:: jinja {{ hostvars[inventory_hostname]['somevar_' ~ other_var] }} For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin: .. code-block:: jinja {{ lookup('vars', 'somevar_' ~ other_var) }} .. _why_no_wheel: Why don't you ship Ansible in wheel format (or other packaging formats)? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ In most cases it has to do with maintainability. There are many ways to ship software and we do not have the resources to release Ansible on every platform. In some cases there are technical issues. For example, our dependencies are not available as Python wheels. .. _ansible_host_delegated: How do I get the original ansible_host when I delegate a task? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ As the documentation states, connection variables are taken from the ``delegate_to`` host, so ``ansible_host`` is overwritten, but you can still access the original via ``hostvars``:: original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}" This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on. .. _scp_protocol_error_filename: How do I fix 'protocol error: filename does not match request' when fetching a file? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_ in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism:: failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request In these releases, SCP tries to validate that the path of the file to fetch matches the requested path. 
The validation fails if the remote filename requires quotes to escape spaces or non-ASCII characters in its path. To avoid this error: * Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways: * Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere * Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False`` * Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False`` * Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook`` * Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section * If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways: * Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``, * Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T`` * Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section .. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error. .. _mfa_support: Does Ansible support multi-factor authentication 2FA/MFA/biometrics/fingerprint/usbkey/OTP/... +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction. As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction. Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also tend to have very short expiration periods, so they require frequent reauthorization, which is also an issue with many hosts and/or a long set of tasks. In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures. With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution. .. _complex_configuration_validation: The 'validate' option is not enough for my needs; what do I do? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails. This uses the temporary file Ansible creates before doing the final update. In many cases this does not work, since the validation tools for the specific application require either specific names, multiple files, or some other factor that is not present in this simple feature. For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue and backups, which most file-based modules also support: .. code-block:: yaml - name: update config and backout if validation fails block: - name: do the actual update, works with copy, lineinfile and any action that allows for `backup`. template: src=template.j2 dest=/x/y/z backup=yes moreoptions=stuff register: updated - name: run validation, this will change a lot as needed. 
We assume it returns an error when not passing; use `failed_when` otherwise. shell: run_validation_command become: yes become_user: requiredbyapp environment: WEIRD_REQUIREMENT: 1 rescue: - name: restore backup file to original, in the hope the previous configuration was working. copy: remote_src: yes dest: /x/y/z src: "{{ updated['backup_file'] }}" always: - name: We choose to always delete backup, but could copy or move, or only delete in rescue. file: path: "{{ updated['backup_file'] }}" state: absent .. _docs_contributions: How do I submit a change to the documentation? ++++++++++++++++++++++++++++++++++++++++++++++ Documentation for Ansible is kept in the main project git repository, and complete instructions for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks! .. _i_dont_see_my_question: I don't see my question here ++++++++++++++++++++++++++++ If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`. .. seealso:: :ref:`working_with_playbooks` An introduction to playbooks :ref:`playbooks_best_practices` Tips and tricks for playbooks `User Mailing List <https://groups.google.com/group/ansible-project>`_ Have a question? Stop by the Google group!
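As a companion to the dynamic-variables entries above, a minimal task sketch (all names and values are illustrative assumptions) showing both recommended forms side by side:

.. code-block:: yaml

    - name: Build a variable name dynamically
      vars:
        other_var: eth0
        somevar_eth0: "10.0.0.1"
      ansible.builtin.debug:
        msg:
          # host variables: go through hostvars
          - "{{ hostvars[inventory_hostname]['ansible_' ~ other_var] | default('facts not gathered') }}"
          # non-host variables: use the vars lookup
          - "{{ lookup('vars', 'somevar_' ~ other_var) }}"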
closed
ansible/ansible
https://github.com/ansible/ansible
77,047
please add/elaborate when variables need to be wrapped in jinja delimiters
### Summary I'm studying Red Hat's RH294 and ansible in general. One thing that often confuses me is when to wrap variables in jinja delimiters `{{ }}` and when not. [playbooks_variables.rst](https://docs.ansible.com/ansible/2.9/user_guide/playbooks_variables.html#using-variables-with-jinja2) says: > Once you’ve defined variables, you can use them in your playbooks using the Jinja2 templating system. Here’s a simple Jinja2 template: > > `My amp goes to {{ max_amp_value }}` But many modules/conditionals have implicit jinja wrapping and fail when a variable is wrapped in `{{ }}`. The [builtin `debug` module's `var` parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/debug_module.html#parameter-var) is one of the few places so far where I found this to be documented explicitly. The [`when` clause](https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html) is another example where it is documented that it is a jinja expression and does not require wrapping. However, `failed_when` and `changed_when` also require variables to be used as-is, but that is not documented at [conditionals](https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html) AFAICT. ### Issue Type Documentation Report ### Component Name playbooks_variables.rst ### Ansible Version ```console n/a ``` ### Configuration ```console n/a ``` ### OS / Environment n/a ### Additional Information Adding an explanation that some modules/variables/statements run in a jinja context and others don't, and how this affects the need to wrap variables, would help newcomers create correct playbooks more quickly. Adding a similar note about the jinja context to playbooks_error_handling.rst, as seen in playbooks_conditionals.rst, would be helpful as well. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
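The `debug` module mentioned above makes the same point within a single module; a minimal sketch (names are illustrative) contrasting its already-templated `var` parameter with its plain-string `msg` parameter:

```yaml
- name: Run something and register its result
  ansible.builtin.command: /usr/bin/true
  register: result

- name: var is already templated; pass the bare name
  ansible.builtin.debug:
    var: result.rc

- name: msg is a plain string; wrap variables in braces
  ansible.builtin.debug:
    msg: "return code was {{ result.rc }}"
```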
https://github.com/ansible/ansible/issues/77047
https://github.com/ansible/ansible/pull/77051
e620b96f49c6b343963a01f1ff6c5419869df2df
84b85a5b5a53a56c460bf4b68b5126fd2ccdc03a
2022-02-17T09:49:40Z
python
2022-02-17T15:28:12Z
docs/docsite/rst/user_guide/playbooks_error_handling.rst
.. _playbooks_error_handling: *************************** Error handling in playbooks *************************** When Ansible receives a non-zero return code from a command or a failure from a module, by default it stops executing on that host and continues on other hosts. However, in some circumstances you may want different behavior. Sometimes a non-zero return code indicates success. Sometimes you want a failure on one host to stop execution on all hosts. Ansible provides tools and settings to handle these situations and help you get the behavior, output, and reporting you want. .. contents:: :local: .. _ignoring_failed_commands: Ignoring failed commands ======================== By default Ansible stops executing tasks on a host when a task fails on that host. You can use ``ignore_errors`` to continue on in spite of the failure. .. code-block:: yaml - name: Do not count this as a failure ansible.builtin.command: /bin/false ignore_errors: yes The ``ignore_errors`` directive only works when the task is able to run and returns a value of 'failed'. It does not make Ansible ignore undefined variable errors, connection failures, execution issues (for example, missing packages), or syntax errors. .. _ignore_unreachable: Ignoring unreachable host errors ================================ .. versionadded:: 2.7 You can ignore a task failure due to the host instance being 'UNREACHABLE' with the ``ignore_unreachable`` keyword. Ansible ignores the task errors, but continues to execute future tasks against the unreachable host. For example, at the task level: .. code-block:: yaml - name: This executes, fails, and the failure is ignored ansible.builtin.command: /bin/true ignore_unreachable: yes - name: This executes, fails, and ends the play for this host ansible.builtin.command: /bin/true And at the playbook level: .. code-block:: yaml - hosts: all ignore_unreachable: yes tasks: - name: This executes, fails, and the failure is ignored ansible.builtin.command: /bin/true - name: This executes, fails, and ends the play for this host ansible.builtin.command: /bin/true ignore_unreachable: no .. _resetting_unreachable: Resetting unreachable hosts =========================== If Ansible cannot connect to a host, it marks that host as 'UNREACHABLE' and removes it from the list of active hosts for the run. You can use `meta: clear_host_errors` to reactivate all hosts, so subsequent tasks can try to reach them again. .. _handlers_and_failure: Handlers and failure ==================== Ansible runs :ref:`handlers <handlers>` at the end of each play. If a task notifies a handler but another task fails later in the play, by default the handler does *not* run on that host, which may leave the host in an unexpected state. For example, a task could update a configuration file and notify a handler to restart some service. If a task later in the same play fails, the configuration file might be changed but the service will not be restarted. You can change this behavior with the ``--force-handlers`` command-line option, by including ``force_handlers: True`` in a play, or by adding ``force_handlers = True`` to ansible.cfg. When handlers are forced, Ansible will run all notified handlers on all hosts, even hosts with failed tasks. (Note that certain errors could still prevent the handler from running, such as a host becoming unreachable.) .. _controlling_what_defines_failure: Defining failure ================ Ansible lets you define what "failure" means in each task using the ``failed_when`` conditional. 
As with all conditionals in Ansible, lists of multiple ``failed_when`` conditions are joined with an implicit ``and``, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator. You may check for failure by searching for a word or phrase in the output of a command .. code-block:: yaml - name: Fail task when the command error output prints FAILED ansible.builtin.command: /usr/bin/example-command -x -y -z register: command_result failed_when: "'FAILED' in command_result.stderr" or based on the return code .. code-block:: yaml - name: Fail task when both files are identical ansible.builtin.raw: diff foo/file1 bar/file2 register: diff_cmd failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2 You can also combine multiple conditions for failure. This task will fail if both conditions are true: .. code-block:: yaml - name: Check if a file exists in temp and fail task if it does ansible.builtin.command: ls /tmp/this_should_not_be_here register: result failed_when: - result.rc == 0 - '"No such" not in result.stdout' If you want the task to fail when only one condition is satisfied, change the ``failed_when`` definition to .. code-block:: yaml failed_when: result.rc == 0 or "No such" not in result.stdout If you have too many conditions to fit neatly into one line, you can split it into a multi-line YAML value with ``>``. .. code-block:: yaml - name: example of many failed_when conditions with OR ansible.builtin.shell: "./myBinary" register: ret failed_when: > ("No such file or directory" in ret.stdout) or (ret.stderr != '') or (ret.rc == 10) .. _override_the_changed_result: Defining "changed" ================== Ansible lets you define when a particular task has "changed" a remote node using the ``changed_when`` conditional. This lets you determine, based on return codes or output, whether a change should be reported in Ansible statistics and whether a handler should be triggered or not. As with all conditionals in Ansible, lists of multiple ``changed_when`` conditions are joined with an implicit ``and``, meaning the task only reports a change when *all* conditions are met. If you want to report a change when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator. For example: .. code-block:: yaml tasks: - name: Report 'changed' when the return code is not equal to 2 ansible.builtin.shell: /usr/bin/billybass --mode="take me to the river" register: bass_result changed_when: "bass_result.rc != 2" - name: This will never report 'changed' status ansible.builtin.shell: wall 'beep' changed_when: False You can also combine multiple conditions to override "changed" result. .. code-block:: yaml - name: Combine multiple conditions to override 'changed' result ansible.builtin.command: /bin/fake_command register: result ignore_errors: True changed_when: - '"ERROR" in result.stderr' - result.rc == 2 See :ref:`controlling_what_defines_failure` for more conditional syntax examples. Ensuring success for command and shell ====================================== The :ref:`command <command_module>` and :ref:`shell <shell_module>` modules care about return codes, so if you have a command whose successful exit code is not zero, you can do this: .. 
code-block:: yaml tasks: - name: Run this command and ignore the result ansible.builtin.shell: /usr/bin/somecommand || /bin/true Aborting a play on all hosts ============================ Sometimes you want a failure on a single host, or failures on a certain percentage of hosts, to abort the entire play on all hosts. You can stop play execution after the first failure happens with ``any_errors_fatal``. For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed. Aborting on the first error: any_errors_fatal --------------------------------------------- If you set ``any_errors_fatal`` and a task returns an error, Ansible finishes the fatal task on all hosts in the current batch, then stops executing the play on all hosts. Subsequent tasks and plays are not executed. You can recover from fatal errors by adding a :ref:`rescue section <block_error_handling>` to the block. You can set ``any_errors_fatal`` at the play or block level. .. code-block:: yaml - hosts: somehosts any_errors_fatal: true roles: - myrole - hosts: somehosts tasks: - block: - include_tasks: mytasks.yml any_errors_fatal: true You can use this feature when all tasks must be 100% successful to continue playbook execution. For example, if you run a service on machines in multiple data centers with load balancers to pass traffic from users to the service, you want all load balancers to be disabled before you stop the service for maintenance. To ensure that any failure in the task that disables the load balancers will stop all other tasks: .. code-block:: yaml --- - hosts: load_balancers_dc_a any_errors_fatal: true tasks: - name: Shut down datacenter 'A' ansible.builtin.command: /usr/bin/disable-dc - hosts: frontends_dc_a tasks: - name: Stop service ansible.builtin.command: /usr/bin/stop-software - name: Update software ansible.builtin.command: /usr/bin/upgrade-software - hosts: load_balancers_dc_a tasks: - name: Start datacenter 'A' ansible.builtin.command: /usr/bin/enable-dc In this example Ansible starts the software upgrade on the front ends only if all of the load balancers are successfully disabled. .. _maximum_failure_percentage: Setting a maximum failure percentage ------------------------------------ By default, Ansible continues to execute tasks as long as there are hosts that have not yet failed. In some situations, such as when executing a rolling update, you may want to abort the play when a certain threshold of failures has been reached. To achieve this, you can set a maximum failure percentage on a play: .. code-block:: yaml --- - hosts: webservers max_fail_percentage: 30 serial: 10 The ``max_fail_percentage`` setting applies to each batch when you use it with :ref:`serial <rolling_update_batch_size>`. In the example above, if more than 3 of the 10 servers in the first (or any) batch of servers failed, the rest of the play would be aborted. .. note:: The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort the play when 2 of the systems failed, set the max_fail_percentage at 49 rather than 50. Controlling errors in blocks ============================ You can also use blocks to define responses to task errors. This approach is similar to exception handling in many programming languages. See :ref:`block_error_handling` for details and examples. .. 
seealso:: :ref:`playbooks_intro` An introduction to playbooks :ref:`playbooks_best_practices` Tips and tricks for playbooks :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the google group! :ref:`communication_irc` How to join Ansible chat channels
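As a companion to the :ref:`block_error_handling` pointer above, a minimal sketch of the block/rescue/always pattern it refers to (the commands are illustrative assumptions):

.. code-block:: yaml

    - name: Attempt a risky change and recover on failure
      block:
        - name: Apply the change
          ansible.builtin.command: /usr/bin/apply-change
      rescue:
        - name: Roll back because the change failed
          ansible.builtin.command: /usr/bin/roll-back
      always:
        - name: Report the outcome either way
          ansible.builtin.debug:
            msg: "change attempted on {{ inventory_hostname }}"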
closed
ansible/ansible
https://github.com/ansible/ansible
77,079
Some tests are skipped due to duplicate names
**SUMMARY** Test names must be unique per scope; otherwise, the second test overrides the first one with the same name. **ISSUE TYPE** - Bugfix Pull Request **COMPONENT NAME** ansible/test/units/parsing/test_mod_args.py ansible/test/units/module_utils/facts/test_collectors.py ansible/test/units/module_utils/common/validation/test_check_required_if.py ansible/test/units/module_utils/basic/test_deprecate_warn.py **ADDITIONAL INFORMATION** For example, if you had a test file that does: ``` def test_a(): pass def test_a(): pass ``` Then only the second `test_a` will be run. More details [here](https://codereview.doctor/features/python/best-practice/avoid-duplicate-unit-test-names). These are the tests that are overriding previously defined tests due to this problem: https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/parsing/test_mod_args.py#L121 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L397 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L413 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/common/validation/test_check_required_if.py#L56 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/basic/test_deprecate_warn.py#L74 I found this issue automatically; see other issues [here](https://codereview.doctor/ansible/ansible)
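The shadowing described above is ordinary Python name rebinding rather than a pytest quirk; a minimal reproduction (the file name is hypothetical) showing that only the second definition survives collection:

```python
# test_duplicates.py -- hypothetical reproduction of the shadowing
def test_a():
    assert True        # never collected: the def below rebinds 'test_a'


def test_a():          # noqa: F811 -- flake8 flags exactly this redefinition
    assert False       # 'pytest test_duplicates.py' collects only this test
```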
https://github.com/ansible/ansible/issues/77079
https://github.com/ansible/ansible/pull/77115
0bd8106d15ed35ba3f1869010721ba958c01158f
2cd6cdc6a74beb2413383837ffc25cdd902264e8
2022-02-21T09:32:00Z
python
2022-02-23T00:41:57Z
test/units/module_utils/basic/test_deprecate_warn.py
# -*- coding: utf-8 -*- # # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import json import pytest from ansible.module_utils.common import warnings @pytest.mark.parametrize('stdin', [{}], indirect=['stdin']) def test_warn(am, capfd): am.warn('warning1') with pytest.raises(SystemExit): am.exit_json(warnings=['warning2']) out, err = capfd.readouterr() assert json.loads(out)['warnings'] == ['warning1', 'warning2'] @pytest.mark.parametrize('stdin', [{}], indirect=['stdin']) def test_deprecate(am, capfd, monkeypatch): monkeypatch.setattr(warnings, '_global_deprecations', []) am.deprecate('deprecation1') am.deprecate('deprecation2', '2.3') # pylint: disable=ansible-deprecated-no-collection-name am.deprecate('deprecation3', version='2.4') # pylint: disable=ansible-deprecated-no-collection-name am.deprecate('deprecation4', date='2020-03-10') # pylint: disable=ansible-deprecated-no-collection-name am.deprecate('deprecation5', collection_name='ansible.builtin') am.deprecate('deprecation6', '2.3', collection_name='ansible.builtin') am.deprecate('deprecation7', version='2.4', collection_name='ansible.builtin') am.deprecate('deprecation8', date='2020-03-10', collection_name='ansible.builtin') with pytest.raises(SystemExit): am.exit_json(deprecations=['deprecation9', ('deprecation10', '2.4')]) out, err = capfd.readouterr() output = json.loads(out) assert ('warnings' not in output or output['warnings'] == []) assert output['deprecations'] == [ {u'msg': u'deprecation1', u'version': None, u'collection_name': None}, {u'msg': u'deprecation2', u'version': '2.3', u'collection_name': None}, {u'msg': u'deprecation3', u'version': '2.4', u'collection_name': None}, {u'msg': u'deprecation4', u'date': '2020-03-10', u'collection_name': None}, {u'msg': u'deprecation5', u'version': None, u'collection_name': 'ansible.builtin'}, {u'msg': u'deprecation6', u'version': '2.3', u'collection_name': 'ansible.builtin'}, {u'msg': u'deprecation7', u'version': '2.4', u'collection_name': 'ansible.builtin'}, {u'msg': u'deprecation8', u'date': '2020-03-10', u'collection_name': 'ansible.builtin'}, {u'msg': u'deprecation9', u'version': None, u'collection_name': None}, {u'msg': u'deprecation10', u'version': '2.4', u'collection_name': None}, ] @pytest.mark.parametrize('stdin', [{}], indirect=['stdin']) def test_deprecate_without_list(am, capfd): with pytest.raises(SystemExit): am.exit_json(deprecations='Simple deprecation warning') out, err = capfd.readouterr() output = json.loads(out) assert ('warnings' not in output or output['warnings'] == []) assert output['deprecations'] == [ {u'msg': u'Simple deprecation warning', u'version': None, u'collection_name': None}, ] @pytest.mark.parametrize('stdin', [{}], indirect=['stdin']) def test_deprecate_without_list(am, capfd): with pytest.raises(AssertionError) as ctx: am.deprecate('Simple deprecation warning', date='', version='') assert ctx.value.args[0] == "implementation error -- version and date must not both be set"
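One way to restore the hidden first test in the file above, sketched under the assumption that a descriptive rename is acceptable (the name below is illustrative, not necessarily the one the linked pull request chose), is to give the second definition its own name:

```python
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_deprecate_with_version_and_date_both_set(am, capfd):
    # same body as the second test_deprecate_without_list above, now
    # collected alongside the first instead of silently replacing it
    with pytest.raises(AssertionError) as ctx:
        am.deprecate('Simple deprecation warning', date='', version='')
    assert ctx.value.args[0] == "implementation error -- version and date must not both be set"
```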
closed
ansible/ansible
https://github.com/ansible/ansible
77,079
Some tests are skipped due to duplicate names
**SUMMARY** Test names must be unique per scope; otherwise the second test overrides the first one with the same name. **ISSUE TYPE** - Bugfix Pull Request **COMPONENT NAME** ansible/test/units/parsing/test_mod_args.py ansible/test/units/module_utils/facts/test_collectors.py ansible/test/units/module_utils/common/validation/test_check_required_if.py ansible/test/units/module_utils/basic/test_deprecate_warn.py **ADDITIONAL INFORMATION** For example, if you had a test file that does: ``` def test_a(): pass def test_a(): pass ``` Then only the second `test_a` will be run. More details [here](https://codereview.doctor/features/python/best-practice/avoid-duplicate-unit-test-names). These are the tests that override previously defined tests due to this problem: https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/parsing/test_mod_args.py#L121 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L397 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L413 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/common/validation/test_check_required_if.py#L56 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/basic/test_deprecate_warn.py#L74 I found this issue automatically; see other issues [here](https://codereview.doctor/ansible/ansible)
https://github.com/ansible/ansible/issues/77079
https://github.com/ansible/ansible/pull/77115
0bd8106d15ed35ba3f1869010721ba958c01158f
2cd6cdc6a74beb2413383837ffc25cdd902264e8
2022-02-21T09:32:00Z
python
2022-02-23T00:41:57Z
test/units/module_utils/common/validation/test_check_required_if.py
# -*- coding: utf-8 -*- # Copyright: (c) 2021, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import pytest from ansible.module_utils._text import to_native from ansible.module_utils.common.validation import check_required_if def test_check_required_if(): arguments_terms = {} params = {} assert check_required_if(arguments_terms, params) == [] def test_check_required_if_missing(): arguments_terms = [["state", "present", ("path",)]] params = {"state": "present"} expected = "state is present but all of the following are missing: path" with pytest.raises(TypeError) as e: check_required_if(arguments_terms, params) assert to_native(e.value) == expected def test_check_required_if_missing_required(): arguments_terms = [["state", "present", ("path", "owner"), True]] params = {"state": "present"} expected = "state is present but any of the following are missing: path, owner" with pytest.raises(TypeError) as e: check_required_if(arguments_terms, params) assert to_native(e.value) == expected def test_check_required_if_missing_multiple(): arguments_terms = [["state", "present", ("path", "owner")]] params = { "state": "present", } expected = "state is present but all of the following are missing: path, owner" with pytest.raises(TypeError) as e: check_required_if(arguments_terms, params) assert to_native(e.value) == expected def test_check_required_if_missing_multiple(): arguments_terms = [["state", "present", ("path", "owner")]] params = { "state": "present", } options_context = ["foo_context"] expected = "state is present but all of the following are missing: path, owner found in foo_context" with pytest.raises(TypeError) as e: check_required_if(arguments_terms, params, options_context) assert to_native(e.value) == expected def test_check_required_if_multiple(): arguments_terms = [["state", "present", ("path", "owner")]] params = { "state": "present", "path": "/foo", "owner": "root", } options_context = ["foo_context"] assert check_required_if(arguments_terms, params) == [] assert check_required_if(arguments_terms, params, options_context) == []
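Here the two `test_check_required_if_missing_multiple` definitions differ only in that the second passes `options_context`, so a rename that encodes that difference keeps both cases alive. The new name is an assumption, not necessarily the one the PR used; the body relies on the imports already at the top of the file above.

```python
# Relies on pytest, check_required_if, and to_native imported in the file above.
def test_check_required_if_missing_multiple_with_context():  # hypothetical rename
    arguments_terms = [["state", "present", ("path", "owner")]]
    params = {"state": "present"}
    options_context = ["foo_context"]
    expected = "state is present but all of the following are missing: path, owner found in foo_context"
    with pytest.raises(TypeError) as e:
        check_required_if(arguments_terms, params, options_context)
    assert to_native(e.value) == expected
```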
closed
ansible/ansible
https://github.com/ansible/ansible
77,079
Some tests are skipped due to duplicate names
**SUMMARY** Test names must be unique per scope; otherwise the second test overrides the first one with the same name. **ISSUE TYPE** - Bugfix Pull Request **COMPONENT NAME** ansible/test/units/parsing/test_mod_args.py ansible/test/units/module_utils/facts/test_collectors.py ansible/test/units/module_utils/common/validation/test_check_required_if.py ansible/test/units/module_utils/basic/test_deprecate_warn.py **ADDITIONAL INFORMATION** For example, if you had a test file that does: ``` def test_a(): pass def test_a(): pass ``` Then only the second `test_a` will be run. More details [here](https://codereview.doctor/features/python/best-practice/avoid-duplicate-unit-test-names). These are the tests that override previously defined tests due to this problem: https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/parsing/test_mod_args.py#L121 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L397 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L413 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/common/validation/test_check_required_if.py#L56 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/basic/test_deprecate_warn.py#L74 I found this issue automatically; see other issues [here](https://codereview.doctor/ansible/ansible)
https://github.com/ansible/ansible/issues/77079
https://github.com/ansible/ansible/pull/77115
0bd8106d15ed35ba3f1869010721ba958c01158f
2cd6cdc6a74beb2413383837ffc25cdd902264e8
2022-02-21T09:32:00Z
python
2022-02-23T00:41:57Z
test/units/module_utils/facts/test_collectors.py
# unit tests for ansible fact collectors # -*- coding: utf-8 -*- # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from units.compat.mock import Mock, patch from . base import BaseFactsTest from ansible.module_utils.facts import collector from ansible.module_utils.facts.system.apparmor import ApparmorFactCollector from ansible.module_utils.facts.system.caps import SystemCapabilitiesFactCollector from ansible.module_utils.facts.system.cmdline import CmdLineFactCollector from ansible.module_utils.facts.system.distribution import DistributionFactCollector from ansible.module_utils.facts.system.dns import DnsFactCollector from ansible.module_utils.facts.system.env import EnvFactCollector from ansible.module_utils.facts.system.fips import FipsFactCollector from ansible.module_utils.facts.system.pkg_mgr import PkgMgrFactCollector, OpenBSDPkgMgrFactCollector from ansible.module_utils.facts.system.platform import PlatformFactCollector from ansible.module_utils.facts.system.python import PythonFactCollector from ansible.module_utils.facts.system.selinux import SelinuxFactCollector from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector from ansible.module_utils.facts.system.ssh_pub_keys import SshPubKeyFactCollector from ansible.module_utils.facts.system.user import UserFactCollector from ansible.module_utils.facts.virtual.base import VirtualCollector from ansible.module_utils.facts.network.base import NetworkCollector from ansible.module_utils.facts.hardware.base import HardwareCollector class CollectorException(Exception): pass class ExceptionThrowingCollector(collector.BaseFactCollector): name = 'exc_throwing' def __init__(self, collectors=None, namespace=None, exception=None): super(ExceptionThrowingCollector, self).__init__(collectors, namespace) self._exception = exception or CollectorException('collection failed') def collect(self, module=None, collected_facts=None): raise self._exception class TestExceptionThrowingCollector(BaseFactsTest): __test__ = True gather_subset = ['exc_throwing'] valid_subsets = ['exc_throwing'] collector_class = ExceptionThrowingCollector def test_collect(self): module = self._mock_module() fact_collector = self.collector_class() self.assertRaises(CollectorException, fact_collector.collect, module=module, collected_facts=self.collected_facts) def test_collect_with_namespace(self): module = self._mock_module() fact_collector = self.collector_class() self.assertRaises(CollectorException, fact_collector.collect_with_namespace, module=module, collected_facts=self.collected_facts) class TestApparmorFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'apparmor'] valid_subsets = ['apparmor'] fact_namespace = 'ansible_apparmor' collector_class = ApparmorFactCollector def test_collect(self): facts_dict = super(TestApparmorFacts, self).test_collect() 
self.assertIn('status', facts_dict['apparmor']) class TestCapsFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'caps'] valid_subsets = ['caps'] fact_namespace = 'ansible_system_capabilities' collector_class = SystemCapabilitiesFactCollector def _mock_module(self): mock_module = Mock() mock_module.params = {'gather_subset': self.gather_subset, 'gather_timeout': 10, 'filter': '*'} mock_module.get_bin_path = Mock(return_value='/usr/sbin/capsh') mock_module.run_command = Mock(return_value=(0, 'Current: =ep', '')) return mock_module class TestCmdLineFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'cmdline'] valid_subsets = ['cmdline'] fact_namespace = 'ansible_cmdline' collector_class = CmdLineFactCollector def test_parse_proc_cmdline_uefi(self): uefi_cmdline = r'initrd=\70ef65e1a04a47aea04f7b5145ea3537\4.10.0-19-generic\initrd root=UUID=50973b75-4a66-4bf0-9764-2b7614489e64 ro quiet' expected = {'initrd': r'\70ef65e1a04a47aea04f7b5145ea3537\4.10.0-19-generic\initrd', 'root': 'UUID=50973b75-4a66-4bf0-9764-2b7614489e64', 'quiet': True, 'ro': True} fact_collector = self.collector_class() facts_dict = fact_collector._parse_proc_cmdline(uefi_cmdline) self.assertDictEqual(facts_dict, expected) def test_parse_proc_cmdline_fedora(self): cmdline_fedora = r'BOOT_IMAGE=/vmlinuz-4.10.16-200.fc25.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.luks.uuid=luks-c80b7537-358b-4a07-b88c-c59ef187479b rd.lvm.lv=fedora/swap rhgb quiet LANG=en_US.UTF-8' # noqa expected = {'BOOT_IMAGE': '/vmlinuz-4.10.16-200.fc25.x86_64', 'LANG': 'en_US.UTF-8', 'quiet': True, 'rd.luks.uuid': 'luks-c80b7537-358b-4a07-b88c-c59ef187479b', 'rd.lvm.lv': 'fedora/swap', 'rhgb': True, 'ro': True, 'root': '/dev/mapper/fedora-root'} fact_collector = self.collector_class() facts_dict = fact_collector._parse_proc_cmdline(cmdline_fedora) self.assertDictEqual(facts_dict, expected) def test_parse_proc_cmdline_dup_console(self): example = r'BOOT_IMAGE=/boot/vmlinuz-4.4.0-72-generic root=UUID=e12e46d9-06c9-4a64-a7b3-60e24b062d90 ro console=tty1 console=ttyS0' # FIXME: Two 'console' keywords? Using a dict for the fact value here loses info. 
Currently the 'last' one wins expected = {'BOOT_IMAGE': '/boot/vmlinuz-4.4.0-72-generic', 'root': 'UUID=e12e46d9-06c9-4a64-a7b3-60e24b062d90', 'ro': True, 'console': 'ttyS0'} fact_collector = self.collector_class() facts_dict = fact_collector._parse_proc_cmdline(example) # TODO: fails because we lose a 'console' self.assertDictEqual(facts_dict, expected) class TestDistributionFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'distribution'] valid_subsets = ['distribution'] fact_namespace = 'ansible_distribution' collector_class = DistributionFactCollector class TestDnsFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'dns'] valid_subsets = ['dns'] fact_namespace = 'ansible_dns' collector_class = DnsFactCollector class TestEnvFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'env'] valid_subsets = ['env'] fact_namespace = 'ansible_env' collector_class = EnvFactCollector def test_collect(self): facts_dict = super(TestEnvFacts, self).test_collect() self.assertIn('HOME', facts_dict['env']) class TestFipsFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'fips'] valid_subsets = ['fips'] fact_namespace = 'ansible_fips' collector_class = FipsFactCollector class TestHardwareCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'hardware'] valid_subsets = ['hardware'] fact_namespace = 'ansible_hardware' collector_class = HardwareCollector collected_facts = {'ansible_architecture': 'x86_64'} class TestNetworkCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'network'] valid_subsets = ['network'] fact_namespace = 'ansible_network' collector_class = NetworkCollector class TestPkgMgrFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'pkg_mgr'] valid_subsets = ['pkg_mgr'] fact_namespace = 'ansible_pkgmgr' collector_class = PkgMgrFactCollector collected_facts = { "ansible_distribution": "Fedora", "ansible_distribution_major_version": "28", "ansible_os_family": "RedHat" } def test_collect(self): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) class TestMacOSXPkgMgrFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'pkg_mgr'] valid_subsets = ['pkg_mgr'] fact_namespace = 'ansible_pkgmgr' collector_class = PkgMgrFactCollector collected_facts = { "ansible_distribution": "MacOSX", "ansible_distribution_major_version": "11", "ansible_os_family": "Darwin" } @patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=lambda x: x == '/opt/homebrew/bin/brew') def test_collect_opt_homebrew(self, p_exists): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) self.assertEqual(facts_dict['pkg_mgr'], 'homebrew') @patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=lambda x: x == '/usr/local/bin/brew') def test_collect_usr_homebrew(self, p_exists): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) self.assertEqual(facts_dict['pkg_mgr'], 'homebrew') @patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', 
side_effect=lambda x: x == '/opt/local/bin/port') def test_collect_macports(self, p_exists): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) self.assertEqual(facts_dict['pkg_mgr'], 'macports') def _sanitize_os_path_apt_get(path): if path == '/usr/bin/apt-get': return True else: return False class TestPkgMgrFactsAptFedora(BaseFactsTest): __test__ = True gather_subset = ['!all', 'pkg_mgr'] valid_subsets = ['pkg_mgr'] fact_namespace = 'ansible_pkgmgr' collector_class = PkgMgrFactCollector collected_facts = { "ansible_distribution": "Fedora", "ansible_distribution_major_version": "28", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "apt" } @patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=_sanitize_os_path_apt_get) def test_collect(self, mock_os_path_exists): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) class TestOpenBSDPkgMgrFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'pkg_mgr'] valid_subsets = ['pkg_mgr'] fact_namespace = 'ansible_pkgmgr' collector_class = OpenBSDPkgMgrFactCollector def test_collect(self): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts) self.assertIsInstance(facts_dict, dict) self.assertIn('pkg_mgr', facts_dict) self.assertEqual(facts_dict['pkg_mgr'], 'openbsd_pkg') class TestPlatformFactCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'platform'] valid_subsets = ['platform'] fact_namespace = 'ansible_platform' collector_class = PlatformFactCollector class TestPythonFactCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'python'] valid_subsets = ['python'] fact_namespace = 'ansible_python' collector_class = PythonFactCollector class TestSelinuxFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'selinux'] valid_subsets = ['selinux'] fact_namespace = 'ansible_selinux' collector_class = SelinuxFactCollector def test_no_selinux(self): with patch('ansible.module_utils.facts.system.selinux.HAVE_SELINUX', False): module = self._mock_module() fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['selinux']['status'], 'Missing selinux Python library') return facts_dict class TestServiceMgrFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'service_mgr'] valid_subsets = ['service_mgr'] fact_namespace = 'ansible_service_mgr' collector_class = ServiceMgrFactCollector # TODO: dedupe some of this test code @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) @patch('ansible.module_utils.facts.system.service_mgr.ServiceMgrFactCollector.is_systemd_managed', return_value=False) @patch('ansible.module_utils.facts.system.service_mgr.ServiceMgrFactCollector.is_systemd_managed_offline', return_value=False) @patch('ansible.module_utils.facts.system.service_mgr.os.path.exists', return_value=False) def test_service_mgr_runit(self, mock_gfc, mock_ism, mock_ismo, mock_ope): # no /proc/1/comm, ps returns non-0 # should fallback to 'service' module = self._mock_module() 
module.run_command = Mock(return_value=(1, '', 'wat')) fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['service_mgr'], 'service') @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) def test_no_proc1_ps_random_init(self, mock_gfc): # no /proc/1/comm, ps returns '/sbin/sys11' which we dont know # should end up return 'sys11' module = self._mock_module() module.run_command = Mock(return_value=(0, '/sbin/sys11', '')) fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['service_mgr'], 'sys11') @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) @patch('ansible.module_utils.facts.system.service_mgr.ServiceMgrFactCollector.is_systemd_managed', return_value=False) @patch('ansible.module_utils.facts.system.service_mgr.ServiceMgrFactCollector.is_systemd_managed_offline', return_value=False) @patch('ansible.module_utils.facts.system.service_mgr.os.path.exists', return_value=False) def test_service_mgr_runit(self, mock_gfc, mock_ism, mock_ismo, mock_ope): # no /proc/1/comm, ps fails, distro and system are clowncar # should end up return 'sys11' module = self._mock_module() module.run_command = Mock(return_value=(1, '', '')) collected_facts = {'distribution': 'clowncar', 'system': 'ClownCarOS'} fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=collected_facts) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['service_mgr'], 'service') @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value='runit-init') @patch('ansible.module_utils.facts.system.service_mgr.os.path.islink', side_effect=lambda x: x == '/sbin/init') @patch('ansible.module_utils.facts.system.service_mgr.os.readlink', side_effect=lambda x: '/sbin/runit-init' if x == '/sbin/init' else '/bin/false') def test_service_mgr_runit(self, mock_gfc, mock_opl, mock_orl): # /proc/1/comm contains 'runit-init', ps fails, service manager is runit # should end up return 'runit' module = self._mock_module() module.run_command = Mock(return_value=(1, '', '')) collected_facts = {'ansible_system': 'Linux'} fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=collected_facts) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['service_mgr'], 'runit') @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) @patch('ansible.module_utils.facts.system.service_mgr.os.path.islink', side_effect=lambda x: x == '/sbin/init') @patch('ansible.module_utils.facts.system.service_mgr.os.readlink', side_effect=lambda x: '/sbin/runit-init' if x == '/sbin/init' else '/bin/false') def test_service_mgr_runit_no_comm(self, mock_gfc, mock_opl, mock_orl): # no /proc/1/comm, ps returns 'COMMAND\n', service manager is runit # should end up return 'runit' module = self._mock_module() module.run_command = Mock(return_value=(1, 'COMMAND\n', '')) collected_facts = {'ansible_system': 'Linux'} fact_collector = self.collector_class() facts_dict = fact_collector.collect(module=module, collected_facts=collected_facts) self.assertIsInstance(facts_dict, dict) self.assertEqual(facts_dict['service_mgr'], 'runit') # TODO: reenable these tests when we can mock more easily # 
@patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) # def test_sunos_fallback(self, mock_gfc): # # no /proc/1/comm, ps fails, 'system' is SunOS # # should end up return 'smf'? # module = self._mock_module() # # FIXME: the result here is a kluge to at least cover more of service_mgr.collect # # TODO: remove # # FIXME: have to force a pid for results here to get into any of the system/distro checks # module.run_command = Mock(return_value=(1, ' 37 ', '')) # collected_facts = {'system': 'SunOS'} # fact_collector = self.collector_class(module=module) # facts_dict = fact_collector.collect(collected_facts=collected_facts) # print('facts_dict: %s' % facts_dict) # self.assertIsInstance(facts_dict, dict) # self.assertEqual(facts_dict['service_mgr'], 'smf') # @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) # def test_aix_fallback(self, mock_gfc): # # no /proc/1/comm, ps fails, 'system' is SunOS # # should end up return 'smf'? # module = self._mock_module() # module.run_command = Mock(return_value=(1, '', '')) # collected_facts = {'system': 'AIX'} # fact_collector = self.collector_class(module=module) # facts_dict = fact_collector.collect(collected_facts=collected_facts) # print('facts_dict: %s' % facts_dict) # self.assertIsInstance(facts_dict, dict) # self.assertEqual(facts_dict['service_mgr'], 'src') # @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None) # def test_linux_fallback(self, mock_gfc): # # no /proc/1/comm, ps fails, 'system' is SunOS # # should end up return 'smf'? # module = self._mock_module() # module.run_command = Mock(return_value=(1, ' 37 ', '')) # collected_facts = {'system': 'Linux'} # fact_collector = self.collector_class(module=module) # facts_dict = fact_collector.collect(collected_facts=collected_facts) # print('facts_dict: %s' % facts_dict) # self.assertIsInstance(facts_dict, dict) # self.assertEqual(facts_dict['service_mgr'], 'sdfadf') class TestSshPubKeyFactCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'ssh_pub_keys'] valid_subsets = ['ssh_pub_keys'] fact_namespace = 'ansible_ssh_pub_leys' collector_class = SshPubKeyFactCollector class TestUserFactCollector(BaseFactsTest): __test__ = True gather_subset = ['!all', 'user'] valid_subsets = ['user'] fact_namespace = 'ansible_user' collector_class = UserFactCollector class TestVirtualFacts(BaseFactsTest): __test__ = True gather_subset = ['!all', 'virtual'] valid_subsets = ['virtual'] fact_namespace = 'ansible_virtual' collector_class = VirtualCollector
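Shadowed definitions like the three `test_service_mgr_runit` methods in the file above can be caught mechanically: pylint's `function-redefined` check (message E0102) fires whenever a function or method name is bound twice in the same scope. A sketch of invoking it programmatically, assuming pylint 2.x is installed; the target path is just an example.

```python
# check_redefs.py -- a sketch, assuming pylint 2.x.
from pylint.lint import Run

Run(
    [
        "--disable=all",
        "--enable=function-redefined",  # pylint message E0102
        "test/units/module_utils/facts/test_collectors.py",
    ],
    exit=False,  # report the findings without calling sys.exit()
)
```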
closed
ansible/ansible
https://github.com/ansible/ansible
77,079
Some tests are skipped due to duplicate names
**SUMMARY** Test names must be unique per scope; otherwise the second test overrides the first one with the same name. **ISSUE TYPE** - Bugfix Pull Request **COMPONENT NAME** ansible/test/units/parsing/test_mod_args.py ansible/test/units/module_utils/facts/test_collectors.py ansible/test/units/module_utils/common/validation/test_check_required_if.py ansible/test/units/module_utils/basic/test_deprecate_warn.py **ADDITIONAL INFORMATION** For example, if you had a test file that does: ``` def test_a(): pass def test_a(): pass ``` Then only the second `test_a` will be run. More details [here](https://codereview.doctor/features/python/best-practice/avoid-duplicate-unit-test-names). These are the tests that override previously defined tests due to this problem: https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/parsing/test_mod_args.py#L121 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L397 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/facts/test_collectors.py#L413 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/common/validation/test_check_required_if.py#L56 https://github.com/ansible/ansible/blob/de9a3bda2cfaace7e3d25b0c4774eefdd9514687/test/units/module_utils/basic/test_deprecate_warn.py#L74 I found this issue automatically; see other issues [here](https://codereview.doctor/ansible/ansible)
https://github.com/ansible/ansible/issues/77079
https://github.com/ansible/ansible/pull/77115
0bd8106d15ed35ba3f1869010721ba958c01158f
2cd6cdc6a74beb2413383837ffc25cdd902264e8
2022-02-21T09:32:00Z
python
2022-02-23T00:41:57Z
test/units/parsing/test_mod_args.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # Copyright 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import pytest import re from ansible.errors import AnsibleParserError from ansible.parsing.mod_args import ModuleArgsParser from ansible.utils.sentinel import Sentinel class TestModArgsDwim: # TODO: add tests that construct ModuleArgsParser with a task reference # TODO: verify the AnsibleError raised on failure knows the task # and the task knows the line numbers INVALID_MULTIPLE_ACTIONS = ( ({'action': 'shell echo hi', 'local_action': 'shell echo hi'}, "action and local_action are mutually exclusive"), ({'action': 'shell echo hi', 'shell': 'echo hi'}, "conflicting action statements: shell, shell"), ({'local_action': 'shell echo hi', 'shell': 'echo hi'}, "conflicting action statements: shell, shell"), ) def _debug(self, mod, args, to): print("RETURNED module = {0}".format(mod)) print(" args = {0}".format(args)) print(" to = {0}".format(to)) def test_basic_shell(self): m = ModuleArgsParser(dict(shell='echo hi')) mod, args, to = m.parse() self._debug(mod, args, to) assert mod == 'shell' assert args == dict( _raw_params='echo hi', ) assert to is Sentinel def test_basic_command(self): m = ModuleArgsParser(dict(command='echo hi')) mod, args, to = m.parse() self._debug(mod, args, to) assert mod == 'command' assert args == dict( _raw_params='echo hi', ) assert to is Sentinel def test_shell_with_modifiers(self): m = ModuleArgsParser(dict(shell='/bin/foo creates=/tmp/baz removes=/tmp/bleep')) mod, args, to = m.parse() self._debug(mod, args, to) assert mod == 'shell' assert args == dict( creates='/tmp/baz', removes='/tmp/bleep', _raw_params='/bin/foo', ) assert to is Sentinel def test_normal_usage(self): m = ModuleArgsParser(dict(copy='src=a dest=b')) mod, args, to = m.parse() self._debug(mod, args, to) assert mod, 'copy' assert args, dict(src='a', dest='b') assert to is Sentinel def test_complex_args(self): m = ModuleArgsParser(dict(copy=dict(src='a', dest='b'))) mod, args, to = m.parse() self._debug(mod, args, to) assert mod, 'copy' assert args, dict(src='a', dest='b') assert to is Sentinel def test_action_with_complex(self): m = ModuleArgsParser(dict(action=dict(module='copy', src='a', dest='b'))) mod, args, to = m.parse() self._debug(mod, args, to) assert mod == 'copy' assert args == dict(src='a', dest='b') assert to is Sentinel def test_action_with_complex_and_complex_args(self): m = ModuleArgsParser(dict(action=dict(module='copy', args=dict(src='a', dest='b')))) mod, args, to = m.parse() self._debug(mod, args, to) assert mod == 'copy' assert args == dict(src='a', dest='b') assert to is Sentinel def test_local_action_string(self): m = ModuleArgsParser(dict(local_action='copy src=a dest=b')) mod, args, delegate_to = m.parse() self._debug(mod, args, delegate_to) assert mod == 'copy' assert args == dict(src='a', dest='b') assert delegate_to == 'localhost' @pytest.mark.parametrize("args_dict, msg", INVALID_MULTIPLE_ACTIONS) def test_multiple_actions(self, args_dict, msg): m = ModuleArgsParser(args_dict) with pytest.raises(AnsibleParserError) as err: m.parse() assert err.value.args[0] == msg def test_multiple_actions(self): args_dict = {'ping': 'data=hi', 'shell': 'echo hi'} m = ModuleArgsParser(args_dict) with pytest.raises(AnsibleParserError) as err: m.parse() assert err.value.args[0].startswith("conflicting action statements: ") actions 
= set(re.search(r'(\w+), (\w+)', err.value.args[0]).groups()) assert actions == set(['ping', 'shell']) def test_bogus_action(self): args_dict = {'bogusaction': {}} m = ModuleArgsParser(args_dict) with pytest.raises(AnsibleParserError) as err: m.parse() assert err.value.args[0].startswith("couldn't resolve module/action 'bogusaction'")
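For orientation, the API exercised by this file can also be driven directly. The sketch below mirrors what `test_shell_with_modifiers` above asserts and is grounded in those tests rather than in any extra documentation.

```python
from ansible.parsing.mod_args import ModuleArgsParser

parser = ModuleArgsParser(dict(shell='/bin/foo creates=/tmp/baz removes=/tmp/bleep'))
action, args, delegate_to = parser.parse()
print(action)       # 'shell'
print(args)         # {'creates': '/tmp/baz', 'removes': '/tmp/bleep', '_raw_params': '/bin/foo'}
print(delegate_to)  # Sentinel, since no local_action/delegate_to was given
```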
closed
ansible/ansible
https://github.com/ansible/ansible
77,085
`deprecated` pylint plugin bug
### Summary ``` 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/utils/ast_walker.py", line 74, in walk 00:55 callback(astroid) 00:55 File "/root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py", line 252, in visit_call 00:55 self._check_version(node, version, collection_name) 00:55 File "/root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py", line 175, in _check_version 00:55 self.add_message('invalid-version', node=node, args=(version,)) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/checkers/base_checker.py", line 111, in add_message 00:55 self.linter.add_message(msgid, line, node, args, confidence, col_offset) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_handler_mix_in.py", line 222, in add_message 00:55 message_definitions = self.msgs_store.get_message_definitions(msgid) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_definition_store.py", line 50, in get_message_definitions 00:55 for m in self.message_id_store.get_active_msgids(msgid_or_symbol) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_id_store.py", line 116, in get_active_msgids 00:55 raise UnknownMessageError(error_msg) 00:55 pylint.exceptions.UnknownMessageError: No such message id or symbol 'invalid-version'. ``` `invalid-version` is not defined in that plugin. Additionally, we probably need to handle the issue where the `version` is a reference to a variable that cannot be resolved via astroid: ``` display.deprecated(msg, version=deprecation[1]['version']) ``` ### Issue Type Bug Report ### Component Name test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py ### Ansible Version ```console $ ansible --version ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment N/A ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Expected Results N/A ### Actual Results ```console N/A ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
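The traceback boils down to a registration mismatch: `add_message` only accepts symbols declared in the checker's `msgs` table, and `invalid-version` was never declared there. A minimal sketch of the mismatch, with the message table abbreviated from the plugin; the fix shown is one plausible correction, not necessarily the one the PR made.

```python
# Abbreviated from the plugin: only symbols registered here are legal.
MSGS = {
    'E9503': ("Invalid deprecated version (%r) found in call to "
              "Display.deprecated or AnsibleModule.deprecate",
              "ansible-invalid-deprecated-version",
              "Used when a call to Display.deprecated specifies an invalid "
              "Ansible version number"),
}

# Buggy call -- pylint raises UnknownMessageError because the symbol is unknown:
#     self.add_message('invalid-version', node=node, args=(version,))
# Plausible fix -- use a symbol that MSGS actually declares:
#     self.add_message('ansible-invalid-deprecated-version', node=node, args=(version,))
```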
https://github.com/ansible/ansible/issues/77085
https://github.com/ansible/ansible/pull/77086
143904f49b806322b1ae95b5b53057f644bf9665
bdf37336c867ef97dffe32fb5e23a82adb37b899
2022-02-21T18:28:29Z
python
2022-02-23T20:42:45Z
changelogs/fragments/77086-correct-pylint-symbols.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,085
`deprecated` pylint plugin bug
### Summary ``` 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/utils/ast_walker.py", line 74, in walk 00:55 callback(astroid) 00:55 File "/root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py", line 252, in visit_call 00:55 self._check_version(node, version, collection_name) 00:55 File "/root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py", line 175, in _check_version 00:55 self.add_message('invalid-version', node=node, args=(version,)) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/checkers/base_checker.py", line 111, in add_message 00:55 self.linter.add_message(msgid, line, node, args, confidence, col_offset) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_handler_mix_in.py", line 222, in add_message 00:55 message_definitions = self.msgs_store.get_message_definitions(msgid) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_definition_store.py", line 50, in get_message_definitions 00:55 for m in self.message_id_store.get_active_msgids(msgid_or_symbol) 00:55 File "/root/.ansible/test/venv/sanity.pylint/3.10/1d079f27/lib/python3.10/site-packages/pylint/message/message_id_store.py", line 116, in get_active_msgids 00:55 raise UnknownMessageError(error_msg) 00:55 pylint.exceptions.UnknownMessageError: No such message id or symbol 'invalid-version'. ``` `invalid-version` is not defined in that plugin. Additionally, we probably need to handle the issue where the `version` is a reference to a variable that cannot be resolved via astroid: ``` display.deprecated(msg, version=deprecation[1]['version']) ``` ### Issue Type Bug Report ### Component Name test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py ### Ansible Version ```console $ ansible --version ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment N/A ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Expected Results N/A ### Actual Results ```console N/A ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77085
https://github.com/ansible/ansible/pull/77086
143904f49b806322b1ae95b5b53057f644bf9665
bdf37336c867ef97dffe32fb5e23a82adb37b899
2022-02-21T18:28:29Z
python
2022-02-23T20:42:45Z
test/lib/ansible_test/_util/controller/sanity/pylint/plugins/deprecated.py
"""Ansible specific plyint plugin for checking deprecations.""" # (c) 2018, Matt Martz <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # -*- coding: utf-8 -*- from __future__ import annotations import datetime import re import astroid from pylint.interfaces import IAstroidChecker from pylint.checkers import BaseChecker from pylint.checkers.utils import check_messages from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils.six import string_types from ansible.release import __version__ as ansible_version_raw from ansible.utils.version import SemanticVersion MSGS = { 'E9501': ("Deprecated version (%r) found in call to Display.deprecated " "or AnsibleModule.deprecate", "ansible-deprecated-version", "Used when a call to Display.deprecated specifies a version " "less than or equal to the current version of Ansible", {'minversion': (2, 6)}), 'E9502': ("Display.deprecated call without a version or date", "ansible-deprecated-no-version", "Used when a call to Display.deprecated does not specify a " "version or date", {'minversion': (2, 6)}), 'E9503': ("Invalid deprecated version (%r) found in call to " "Display.deprecated or AnsibleModule.deprecate", "ansible-invalid-deprecated-version", "Used when a call to Display.deprecated specifies an invalid " "Ansible version number", {'minversion': (2, 6)}), 'E9504': ("Deprecated version (%r) found in call to Display.deprecated " "or AnsibleModule.deprecate", "collection-deprecated-version", "Used when a call to Display.deprecated specifies a collection " "version less than or equal to the current version of this " "collection", {'minversion': (2, 6)}), 'E9505': ("Invalid deprecated version (%r) found in call to " "Display.deprecated or AnsibleModule.deprecate", "collection-invalid-deprecated-version", "Used when a call to Display.deprecated specifies an invalid " "collection version number", {'minversion': (2, 6)}), 'E9506': ("No collection name found in call to Display.deprecated or " "AnsibleModule.deprecate", "ansible-deprecated-no-collection-name", "The current collection name in format `namespace.name` must " "be provided as collection_name when calling Display.deprecated " "or AnsibleModule.deprecate (`ansible.builtin` for ansible-core)", {'minversion': (2, 6)}), 'E9507': ("Wrong collection name (%r) found in call to " "Display.deprecated or AnsibleModule.deprecate", "wrong-collection-deprecated", "The name of the current collection must be passed to the " "Display.deprecated resp. AnsibleModule.deprecate calls " "(`ansible.builtin` for ansible-core)", {'minversion': (2, 6)}), 'E9508': ("Expired date (%r) found in call to Display.deprecated " "or AnsibleModule.deprecate", "ansible-deprecated-date", "Used when a call to Display.deprecated specifies a date " "before today", {'minversion': (2, 6)}), 'E9509': ("Invalid deprecated date (%r) found in call to " "Display.deprecated or AnsibleModule.deprecate", "ansible-invalid-deprecated-date", "Used when a call to Display.deprecated specifies an invalid " "date. 
It must be a string in format `YYYY-MM-DD` (ISO 8601)", {'minversion': (2, 6)}), 'E9510': ("Both version and date found in call to " "Display.deprecated or AnsibleModule.deprecate", "ansible-deprecated-both-version-and-date", "Only one of version and date must be specified", {'minversion': (2, 6)}), 'E9511': ("Removal version (%r) must be a major release, not a minor or " "patch release (see the specification at https://semver.org/)", "removal-version-must-be-major", "Used when a call to Display.deprecated or " "AnsibleModule.deprecate for a collection specifies a version " "which is not of the form x.0.0", {'minversion': (2, 6)}), } ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version_raw.split('.')[:3])) def _get_expr_name(node): """Funciton to get either ``attrname`` or ``name`` from ``node.func.expr`` Created specifically for the case of ``display.deprecated`` or ``self._display.deprecated`` """ try: return node.func.expr.attrname except AttributeError: # If this fails too, we'll let it raise, the caller should catch it return node.func.expr.name def parse_isodate(value): """Parse an ISO 8601 date string.""" msg = 'Expected ISO 8601 date string (YYYY-MM-DD)' if not isinstance(value, string_types): raise ValueError(msg) # From Python 3.7 in, there is datetime.date.fromisoformat(). For older versions, # we have to do things manually. if not re.match('^[0-9]{4}-[0-9]{2}-[0-9]{2}$', value): raise ValueError(msg) try: return datetime.datetime.strptime(value, '%Y-%m-%d').date() except ValueError: raise ValueError(msg) class AnsibleDeprecatedChecker(BaseChecker): """Checks for Display.deprecated calls to ensure that the ``version`` has not passed or met the time for removal """ __implements__ = (IAstroidChecker,) name = 'deprecated' msgs = MSGS options = ( ('collection-name', { 'default': None, 'type': 'string', 'metavar': '<name>', 'help': 'The collection\'s name used to check collection names in deprecations.', }), ('collection-version', { 'default': None, 'type': 'string', 'metavar': '<version>', 'help': 'The collection\'s version number used to check deprecations.', }), ) def __init__(self, *args, **kwargs): self.collection_version = None self.collection_name = None super().__init__(*args, **kwargs) def set_option(self, optname, value, action=None, optdict=None): super().set_option(optname, value, action, optdict) if optname == 'collection-version' and value is not None: self.collection_version = SemanticVersion(self.config.collection_version) if optname == 'collection-name' and value is not None: self.collection_name = self.config.collection_name def _check_date(self, node, date): if not isinstance(date, str): self.add_message('invalid-date', node=node, args=(date,)) return try: date_parsed = parse_isodate(date) except ValueError: self.add_message('ansible-invalid-deprecated-date', node=node, args=(date,)) return if date_parsed < datetime.date.today(): self.add_message('ansible-deprecated-date', node=node, args=(date,)) def _check_version(self, node, version, collection_name): if not isinstance(version, (str, float)): self.add_message('invalid-version', node=node, args=(version,)) return version_no = str(version) if collection_name == 'ansible.builtin': # Ansible-base try: if not version_no: raise ValueError('Version string should not be empty') loose_version = LooseVersion(str(version_no)) if ANSIBLE_VERSION >= loose_version: self.add_message('ansible-deprecated-version', node=node, args=(version,)) except ValueError: self.add_message('ansible-invalid-deprecated-version', node=node, 
args=(version,)) elif collection_name: # Collections try: if not version_no: raise ValueError('Version string should not be empty') semantic_version = SemanticVersion(version_no) if collection_name == self.collection_name and self.collection_version is not None: if self.collection_version >= semantic_version: self.add_message('collection-deprecated-version', node=node, args=(version,)) if semantic_version.major != 0 and (semantic_version.minor != 0 or semantic_version.patch != 0): self.add_message('removal-version-must-be-major', node=node, args=(version,)) except ValueError: self.add_message('collection-invalid-deprecated-version', node=node, args=(version,)) @check_messages(*(MSGS.keys())) def visit_call(self, node): """Visit a call node.""" version = None date = None collection_name = None try: if (node.func.attrname == 'deprecated' and 'display' in _get_expr_name(node) or node.func.attrname == 'deprecate' and _get_expr_name(node)): if node.keywords: for keyword in node.keywords: if len(node.keywords) == 1 and keyword.arg is None: # This is likely a **kwargs splat return if keyword.arg == 'version': if isinstance(keyword.value.value, astroid.Name): # This is likely a variable return version = keyword.value.value if keyword.arg == 'date': if isinstance(keyword.value.value, astroid.Name): # This is likely a variable return date = keyword.value.value if keyword.arg == 'collection_name': if isinstance(keyword.value.value, astroid.Name): # This is likely a variable return collection_name = keyword.value.value if not version and not date: try: version = node.args[1].value except IndexError: self.add_message('ansible-deprecated-no-version', node=node) return if version and date: self.add_message('ansible-deprecated-both-version-and-date', node=node) if collection_name: this_collection = collection_name == (self.collection_name or 'ansible.builtin') if not this_collection: self.add_message('wrong-collection-deprecated', node=node, args=(collection_name,)) elif self.collection_name is not None: self.add_message('ansible-deprecated-no-collection-name', node=node) if date: self._check_date(node, date) elif version: self._check_version(node, version, collection_name) except AttributeError: # Not the type of node we are interested in pass def register(linter): """required method to auto register this checker """ linter.register_checker(AnsibleDeprecatedChecker(linter))
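The second half of the report, unresolvable variable references like `version=deprecation[1]['version']`, can be probed with astroid directly. A sketch under the assumption that astroid 2.x is installed; note that it inspects `keyword.value` itself, which is not exactly the `keyword.value.value` pattern in the plugin excerpt above.

```python
import astroid

call = astroid.extract_node(
    "display.deprecated('msg', version=some_version, collection_name='ansible.builtin')"
)
for keyword in call.keywords:
    if isinstance(keyword.value, astroid.Name):
        # A bare variable reference: its runtime value is unknowable statically,
        # so a checker should skip it instead of crashing.
        print(f"{keyword.arg} is a variable reference: {keyword.value.name}")
    else:
        print(f"{keyword.arg} = {keyword.value.value!r}")
```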
closed
ansible/ansible
https://github.com/ansible/ansible
76,134
Documented install instructions for Ubuntu do not work
### Summary Followed the instructions at https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-ubuntu installing on Ubuntu 20.04. Ran these commands per the documentation: ``` $ sudo apt update $ sudo apt install software-properties-common $ sudo add-apt-repository --yes --update ppa:ansible/ansible $ sudo apt install ansible ``` When I try to run any ansible command I get this error: ``` $ ansible --version ERROR! Unexpected Exception: No module named yaml the full traceback was: Traceback (most recent call last): File "/usr/local/bin/ansible", line 88, in <module> mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass) File "/usr/local/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 28, in <module> import yaml ImportError: No module named yaml ``` ### Issue Type Documentation Report ### Component Name ansible? ### Ansible Version ```console $ ansible --version ERROR! Unexpected Exception: No module named yaml the full traceback was: Traceback (most recent call last): File "/usr/local/bin/ansible", line 88, in <module> mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass) File "/usr/local/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 28, in <module> import yaml ImportError: No module named yaml ``` ### Configuration ```console $ ansible-config dump --only-changed No output ``` ### OS / Environment Ubuntu 20.04 ### Additional Information Need to improve documentation for Ubuntu 20.04 ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
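The traceback points at `/usr/local/bin/ansible` running under Python 2.7, which is characteristic of a leftover pip installation shadowing the freshly apt-installed package on PATH rather than a fault in the PPA itself. A hedged diagnostic sketch (the paths named in the comments are illustrative):

```python
# which_ansible.py -- a sketch for spotting a stale entry point on PATH.
import shutil

found = shutil.which("ansible")
print(found)
# /usr/local/bin/ansible here suggests an old 'pip install ansible' (Python 2,
# no PyYAML) is found before the apt package's /usr/bin/ansible; removing the
# stale install (for example with 'pip uninstall ansible') lets the apt copy run.
```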
https://github.com/ansible/ansible/issues/76134
https://github.com/ansible/ansible/pull/77137
0bb70b6b9f01a4de39d2824a0a5a91af339559d4
71dfe32ab664b1932d0fac051b2859d5b4fc2b95
2021-10-25T21:42:13Z
python
2022-02-24T14:29:06Z
docs/docsite/rst/installation_guide/intro_installation.rst
.. _installation_guide: .. _intro_installation_guide: ****************** Installing Ansible ****************** Ansible is an agentless automation tool that you install on a control node. From the control node, Ansible manages machines and other devices remotely (by default, over the SSH protocol). To install Ansible for use at the command line, simply install the Ansible package on one machine (which could easily be a laptop). You do not need to install a database or run any daemons. Ansible can manage an entire fleet of remote machines from that one control node. .. contents:: :local: Prerequisites ============= Before you install Ansible, review the requirements for a control node. Before you use Ansible, review the requirements for managed nodes (those end devices you want to automate). Control nodes and managed nodes have different minimum requirements. .. _control_node_requirements: Control node requirements ------------------------- For your control node (the machine that runs Ansible), you can use any machine with Python 3.8 or newer installed. This includes Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on. Windows is not supported for the control node, read more about this in `Matt Davis's blog post <http://blog.rolpdog.com/2020/03/why-no-ansible-controller-for-windows.html>`_. .. warning:: Please note that some plugins that run on the control node have additional requirements. These requirements should be listed in the plugin documentation. When choosing a control node, remember that any management system benefits from being run near the machines being managed. If you are using Ansible to manage machines in a cloud, consider using a machine inside that cloud as your control node. In most cases Ansible will perform better from a machine on the cloud than from a machine on the open Internet. .. warning:: Ansible 2.11 will make Python 3.8 a soft dependency for the control node, but will function with the aforementioned requirements. Ansible 2.12 will require Python 3.8 or newer to function on the control node. Starting with Ansible 2.11, the project will only be packaged for Python 3.8 and newer. .. _managed_node_requirements: Managed node requirements ------------------------- Although you do not need a daemon on your managed nodes, you do need a way for Ansible to communicate with them. For most managed nodes, Ansible makes a connection over SSH and transfers modules using SFTP. If SSH works but SFTP is not available on some of your managed nodes, you can switch to SCP in :ref:`ansible.cfg <ansible_configuration_settings>`. For any machine or device that can run Python, you also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later). .. warning:: Please note that some modules have additional requirements that need to be satisfied on the 'target' machine (the managed node). These requirements should be listed in the module documentation. .. note:: * If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can use the :ref:`yum module<yum_module>` or :ref:`dnf module<dnf_module>` in Ansible to install this package on remote systems that do not have it. * By default, before the first Python module in a playbook runs on a host, Ansible attempts to discover a suitable Python interpreter on that host. 
You can override the discovery behavior by setting the :ref:`ansible_python_interpreter<ansible_python_interpreter>` inventory variable to a specific interpreter, and in other ways. See :ref:`interpreter_discovery` for details. * Ansible's :ref:`raw module<raw_module>`, and the :ref:`script module<script_module>`, do not depend on a client side install of Python to run. Technically, you can use Ansible to install a compatible version of Python using the :ref:`raw module<raw_module>`, which then allows you to use everything else. For example, if you need to bootstrap Python 2 onto a RHEL-based system, you can install it as follows: .. code-block:: shell $ ansible myhost --become -m raw -a "yum install -y python2" .. _what_version: Selecting an Ansible artifact and version to install ==================================================== Starting with version 2.10, Ansible distributes two artifacts: a community package called ``ansible`` and a minimalist language and runtime called ``ansible-core`` (called `ansible-base` in version 2.10). Choose the Ansible artifact and version that matches your particular needs. Installing the Ansible community package ---------------------------------------- The ``ansible`` package includes the Ansible language and runtime plus a range of community curated Collections. It recreates and expands on the functionality that was included in Ansible 2.9. You can choose any of the following ways to install the Ansible community package: * Install the latest release with your OS package manager (for Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu). * Install with ``pip`` (the Python package manager). .. _install_core: Installing `ansible-core` ------------------------- Ansible also distributes a minimalist object called ``ansible-core`` (or ``ansible-base`` in version 2.10). It contains the Ansible language, runtime, and a short list of core modules and other plugins. You can build functionality on top of ``ansible-core`` by installing collections from Galaxy, Automation Hub, or any other source. You can choose any of the following ways to install ``ansible-core``: * Install ``ansible-core`` (version 2.11 and greater) or ``ansible-base`` (version 2.10) with ``pip``. * Install ``ansible-core`` (version 2.11 and greater) RPM package with ``dnf``. * Install ``ansible-core`` from source from the ansible/ansible GitHub repository to access the development (``devel``) version to develop or test the latest features. .. note:: You should only run ``ansible-core`` from ``devel`` if you are modifying ``ansible-core``, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. Ansible generally creates new releases twice a year. See :ref:`release_and_maintenance` for information on release timing and maintenance of older releases. .. _from_pip: Installing and upgrading Ansible with ``pip`` ============================================= Ansible can be installed on many systems with ``pip``, the Python package manager. Prerequisites: Installing ``pip`` ---------------------------------- If ``pip`` is not already available on your system, run the following commands to install it:: $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py $ python get-pip.py --user You may need to perform some additional configuration before you are able to run Ansible. See the Python documentation on `installing to the user site`_ for more information. .. 
_installing to the user site: https://packaging.python.org/tutorials/installing-packages/#installing-to-the-user-site Installing Ansible with ``pip`` ------------------------------- .. note:: If you have Ansible 2.9 or older installed or Ansible 3, see :ref:`pip_upgrade`. Once ``pip`` is installed, you can install Ansible:: $ python -m pip install --user ansible In order to use the ``paramiko`` connection plugin or modules that require ``paramiko``, install the required module [1]_:: $ python -m pip install --user paramiko If you wish to install Ansible globally, run the following commands:: $ sudo python get-pip.py $ sudo python -m pip install ansible .. note:: Running ``pip`` with ``sudo`` will make global changes to the system. Since ``pip`` does not coordinate with system package managers, it could make changes to your system that leaves it in an inconsistent or non-functioning state. This is particularly true for macOS. Installing with ``--user`` is recommended unless you understand fully the implications of modifying global files on the system. .. note:: Older versions of ``pip`` default to http://pypi.python.org/simple, which no longer works. Please make sure you have the latest version of ``pip`` before installing Ansible. If you have an older version of ``pip`` installed, you can upgrade by following `pip's upgrade instructions <https://pip.pypa.io/en/stable/installing/#upgrading-pip>`_ . .. _from_pip_venv: Installing Ansible in a virtual environment with ``pip`` -------------------------------------------------------- .. note:: If you have Ansible 2.9 or older installed or Ansible 3, see :ref:`pip_upgrade`. Ansible can also be installed inside a new or existing ``virtualenv``:: $ python -m virtualenv ansible # Create a virtualenv if one does not already exist $ source ansible/bin/activate # Activate the virtual environment $ python -m pip install ansible .. _pip_upgrade: Upgrading Ansible with ``pip`` ------------------------------ Upgrading from 2.9 or earlier to 2.10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Starting in version 2.10, Ansible is made of two packages. When you upgrade from version 2.9 and older to version 2.10 or later, you need to uninstall the old Ansible version (2.9 or earlier) before upgrading. If you do not uninstall the older version of Ansible, you will see the following message, and no change will be performed: .. code-block:: console Cannot install ansible-base with a pre-existing ansible==2.x installation. Installing ansible-base with ansible-2.9 or older currently installed with pip is known to cause problems. Please uninstall ansible and install the new version: pip uninstall ansible pip install ansible-base ... As explained by the message, to upgrade you must first remove the version of Ansible installed and then install it to the latest version. .. code-block:: console $ pip uninstall ansible $ pip install ansible Upgrading from Ansible 3 or ansible-core 2.10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``ansible-base`` only exists for version 2.10 and in Ansible 3. In 2.11 and later, the package is called ``ansible-core``. Before installing ``ansible-core`` or Ansible 4, you must uninstall ``ansible-base`` if you have installed Ansible 3 or ``ansible-base`` 2.10. To upgrade to ``ansible-core``: .. code-block:: bash pip uninstall ansible-base pip install ansible-core To upgrade to Ansible 4: .. code-block:: bash pip uninstall ansible-base pip install ansible .. _installing_the_control_node: .. 
.. _installing_the_control_node:
.. _from_yum:

Installing Ansible on specific operating systems
================================================

Follow these instructions to install the Ansible community package on a variety of operating systems.

Installing Ansible on RHEL, CentOS, or Fedora
----------------------------------------------

On Fedora:

.. code-block:: bash

    $ sudo dnf install ansible

On RHEL:

.. code-block:: bash

    $ sudo yum install ansible

On CentOS:

.. code-block:: bash

    $ sudo yum install epel-release
    $ sudo yum install ansible

RPMs for currently supported versions of RHEL and CentOS are also available from `EPEL <https://fedoraproject.org/wiki/EPEL>`_.

Ansible can manage older operating systems that contain Python 2.6 or higher.

.. _from_apt:

Installing Ansible on Ubuntu
----------------------------

Ubuntu builds are available `in a PPA here <https://launchpad.net/~ansible/+archive/ubuntu/ansible>`_.

To configure the PPA on your machine and install Ansible, run these commands:

.. code-block:: bash

    $ sudo apt update
    $ sudo apt install software-properties-common
    $ sudo add-apt-repository --yes --update ppa:ansible/ansible
    $ sudo apt install ansible

.. note::

    On older Ubuntu distributions, "software-properties-common" is called "python-software-properties". You may want to use ``apt-get`` instead of ``apt`` on older versions. Also, be aware that only newer distributions (in other words, 18.04, 18.10, and so on) have a ``-u`` or ``--update`` flag, so adjust your script accordingly.

Installing Ansible on Debian
----------------------------

Debian users can use the same source as the Ubuntu PPA, with the codename mapping in the following table.

.. list-table::
   :header-rows: 1

   * - Debian
     -
     - Ubuntu
   * - Debian 11 (Bullseye)
     - ->
     - Ubuntu 20.04 (Focal)
   * - Debian 10 (Buster)
     - ->
     - Ubuntu 18.04 (Bionic)
   * - Debian 9 (Stretch)
     - ->
     - Ubuntu 16.04 (Xenial)
   * - Debian 8 (Jessie)
     - ->
     - Ubuntu 14.04 (Trusty)

.. note::

    As of Ansible 4.0.0, new releases will only be generated for Ubuntu 18.04 (Bionic) or later releases.

Add the following line to ``/etc/apt/sources.list`` or ``/etc/apt/sources.list.d/ansible.list``:

.. code-block:: bash

    deb http://ppa.launchpad.net/ansible/ansible/ubuntu MATCHING_UBUNTU_CODENAME_HERE main

Example for Debian 11 (Bullseye):

.. code-block:: bash

    deb http://ppa.launchpad.net/ansible/ansible/ubuntu focal main

Then run these commands:

.. code-block:: bash

    $ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
    $ sudo apt update
    $ sudo apt install ansible

Installing Ansible on Gentoo with portage
-----------------------------------------

.. code-block:: bash

    $ emerge -av app-admin/ansible

To install the newest version, you may need to unmask the Ansible package prior to emerging:

.. code-block:: bash

    $ echo 'app-admin/ansible' >> /etc/portage/package.accept_keywords

Installing Ansible on FreeBSD
-----------------------------

You can install Ansible on FreeBSD either from a package or from a port. See the FreeBSD handbook `Chapter 4. Installing Applications: Packages and Ports <https://docs.freebsd.org/en/books/handbook/ports/>`_.

A best practice is to install from packages on a fresh system and to update and upgrade from ports later. Do not mix the two approaches: once you start updating and upgrading from ports, keep using ports. See the warning in the FreeBSD handbook `4.5. Using the Ports Collection <https://docs.freebsd.org/en/books/handbook/ports/#ports-using>`_.
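If you stay with packages, routine updates are a short sequence of commands. This is a minimal illustration; see the handbook chapters above for details:

.. code-block:: bash

    shell> pkg update
    shell> pkg upgrade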
FreeBSD packages
^^^^^^^^^^^^^^^^

Installation from packages is simpler than installation from ports. See the details in the FreeBSD handbook `4.3. Finding Software <https://docs.freebsd.org/en/books/handbook/ports/#ports-finding-applications>`_ and `4.4. Using pkg for Binary Package Management <https://docs.freebsd.org/en/books/handbook/ports/#pkgng-intro>`_.

Take a look at the available packages, for example:

.. code-block:: bash

    shell> pkg search ansible
    ansible-sshjail-1.1.0.35         Ansible connector for remote jails
    py38-ansible-4.7.0               Radically simple IT automation
    py38-ansible-base-2.10.15        Radically simple IT automation
    py38-ansible-core-2.11.6         Radically simple IT automation
    py38-ansible-iocage-g20200327,1  Ansible module for iocage
    py38-ansible-kld-g20200803,1     Ansible module to load kernel modules or update /boot/loader.conf
    py38-ansible-lint-5.3.2          Checks playbooks for sub-optimal practices and behaviour
    py38-ansible-runner-2.0.2        Extensible embeddable ansible job runner
    py38-ansible-sysrc-g20200803_1,1 Ansible module to set sysvars in rc.conf
    py38-ansible2-2.9.27             Radically simple IT automation

Pick the flavor of the package (only py38 is available in the example above) and install it (as root, of course):

.. code-block:: bash

    shell> pkg install py38-ansible

The dependencies will be installed automatically after you approve them. For example, the installation of py38-ansible depends on the packages listed below:

.. code-block:: bash

    shell> pkg info -d py38-ansible
    py38-ansible-4.3.0:
            py38-ansible-core-2.11.3
            python38-3.8.12
            py38-setuptools-57.0.0

FreeBSD ports
^^^^^^^^^^^^^

Installation from ports is more complex than installation from packages, but also more flexible. See the details in the FreeBSD handbook `4.5. Using the Ports Collection <https://docs.freebsd.org/en/books/handbook/ports/#ports-using>`_.

To install Ansible from a port, change to the port's directory and install it (as root, of course):

.. code-block:: bash

    shell> cd /usr/ports/sysutils/ansible
    shell> make install clean

.. note::

    If you want to learn more about flavors, see the Porter's Handbook `Chapter 7. Flavors <https://docs.freebsd.org/en/books/porters-handbook/flavors/>`_.

.. _on_macos:

Installing Ansible on macOS
---------------------------

The preferred way to install Ansible on a Mac is with ``pip``. The instructions can be found in :ref:`from_pip`.

.. note::

    macOS 12.3 removes the Python 2 installation. The official recommendation for installing Python on macOS for use by Ansible is to use the installer provided by `Python.org <https://www.python.org/downloads/macos/>`_. Alternatively, you can manually execute ``/usr/bin/python3``, provided along with macOS, and follow the instructions to install the Xcode developer tools. This is not listed as the official recommendation due to the extra dependencies.

.. note::

    If you have Ansible 2.9 or older installed or Ansible 3, see :ref:`pip_upgrade`.

.. _from_pkgutil:

Installing Ansible on Solaris
-----------------------------

Ansible is available for Solaris as a `SysV package from OpenCSW <https://www.opencsw.org/packages/ansible/>`_.

.. code-block:: bash

    # pkgadd -d http://get.opencsw.org/now
    # /opt/csw/bin/pkgutil -i ansible

.. _from_pacman:

Installing Ansible on Arch Linux
---------------------------------

Ansible is available in the Community repository::

    $ pacman -S ansible

The AUR has a PKGBUILD for pulling directly from GitHub called `ansible-core-git <https://aur.archlinux.org/packages/ansible-core-git>`_.
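If you use an AUR helper, installation of that PKGBUILD might look like the following. This is an illustrative sketch, not an official instruction; ``yay`` is just one of several helpers:

.. code-block:: bash

    $ yay -S ansible-core-git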
Also see the `Ansible <https://wiki.archlinux.org/index.php/Ansible>`_ page on the ArchWiki.

.. _from_sbopkg:

Installing Ansible on Slackware Linux
-------------------------------------

An Ansible build script is available in the `SlackBuilds.org <https://slackbuilds.org/apps/ansible/>`_ repository. It can be built and installed using `sbopkg <https://sbopkg.org/>`_.

Create a queue with Ansible and all of its dependencies::

    # sqg -p ansible

Build and install the packages from the created queuefile (answer Q when sbopkg asks whether to use the queue or the package)::

    # sbopkg -k -i ansible

.. _from_swupd:

Installing Ansible on Clear Linux
---------------------------------

Ansible and its dependencies are available as part of the sysadmin host management bundle::

    $ sudo swupd bundle-add sysadmin-hostmgmt

Software updates are managed by the ``swupd`` tool::

    $ sudo swupd update

.. _from_pip_devel:
.. _getting_ansible:
.. _from_windows:

Installing Ansible on Windows
------------------------------

See :ref:`windows_faq_ansible`.

Installing and running the ``devel`` branch from source
=========================================================

In Ansible 2.10 and later, the `ansible/ansible repository <https://github.com/ansible/ansible>`_ contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core``. New features are added to ``ansible-core`` on a branch called ``devel``. If you are testing new features, fixing bugs, or otherwise working with the development team on changes to the core code, you can install and run ``devel``.

.. note::

    You should only install and run the ``devel`` branch if you are modifying ``ansible-core`` or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.

.. note::

    If you want to use Ansible AWX as the control node, do not install or run the ``devel`` branch of Ansible. Use an OS package manager (like ``apt`` or ``yum``) or ``pip`` to install a stable version.

If you are running Ansible from source, you may also wish to follow the `Ansible GitHub project <https://github.com/ansible/ansible>`_. We track issues, document bugs, and share feature ideas in this and other related repositories.

For more information on getting involved in the Ansible project, see the :ref:`ansible_community_guide`. For more information on creating Ansible modules and Collections, see the :ref:`developer_guide`.

Installing ``devel`` from GitHub with ``pip``
---------------------------------------------

You can install the ``devel`` branch of ``ansible-core`` directly from GitHub with ``pip``:

.. code-block:: bash

    $ python -m pip install --user https://github.com/ansible/ansible/archive/devel.tar.gz

.. note::

    If you have Ansible 2.9 or older installed or Ansible 3, see :ref:`pip_upgrade`.

You can replace ``devel`` in the URL mentioned above with any other branch or tag on GitHub to install older versions of Ansible (prior to ``ansible-base`` 2.10), tagged alpha or beta versions, and release candidates. For example, this installs all of Ansible from the pre-split ``stable-2.9`` branch:

.. code-block:: bash

    $ python -m pip install --user https://github.com/ansible/ansible/archive/stable-2.9.tar.gz
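Installing from a tag rather than a branch works the same way. A release-candidate install might look like this (the tag name is illustrative; check the repository's tags for real values):

.. code-block:: bash

    $ python -m pip install --user https://github.com/ansible/ansible/archive/v2.12.0rc1.tar.gz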
See :ref:`from_source` for instructions on how to run ``ansible-core`` directly from source.

Installing ``devel`` from GitHub by cloning
-------------------------------------------

You can install the ``devel`` branch of ``ansible-core`` by cloning the GitHub repository:

.. code-block:: bash

    $ git clone https://github.com/ansible/ansible.git
    $ cd ./ansible

The default branch is ``devel``.

.. _from_source:

Running the ``devel`` branch from a clone
-----------------------------------------

``ansible-core`` is easy to run from source. You do not need ``root`` permissions to use it and there is no software to actually install. No daemons or database setup are required.

Once you have cloned the ``ansible-core`` repository, set up the Ansible environment:

Using Bash:

.. code-block:: bash

    $ source ./hacking/env-setup

Using Fish::

    $ source ./hacking/env-setup.fish

If you want to suppress spurious warnings/errors, use::

    $ source ./hacking/env-setup -q

If you do not have ``pip`` installed in your version of Python, install it::

    $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    $ python get-pip.py --user

Ansible also uses the following Python modules that need to be installed [1]_:

.. code-block:: bash

    $ python -m pip install --user -r ./requirements.txt

To update the ``devel`` branch of ``ansible-core`` on your local machine, use pull-with-rebase so any local changes are replayed, and update the submodules:

.. code-block:: bash

    $ git pull --rebase
    $ git submodule update --init --recursive

After you run the env-setup script, you will be running from the source code. The default inventory file will be ``/etc/ansible/hosts``. You can optionally specify an inventory file (see :ref:`inventory`) other than ``/etc/ansible/hosts``:

.. code-block:: bash

    $ echo "127.0.0.1" > ~/ansible_hosts
    $ export ANSIBLE_INVENTORY=~/ansible_hosts

You can read more about the inventory file at :ref:`inventory`.

Confirming your installation
============================

Whatever method of installing Ansible you chose, you can test that it is installed correctly with a ping command:

.. code-block:: bash

    $ ansible all -m ping --ask-pass

.. _tagged_releases:

Finding tarballs of tagged releases
===================================

If you are packaging Ansible or want to build a local package yourself and want to avoid a git checkout, you can use a tarball of a tagged release. You can download the latest stable release from PyPI's `ansible package page <https://pypi.org/project/ansible/>`_. If you need a specific older version, beta version, or release candidate, you can use the pattern ``pypi.python.org/packages/source/a/ansible/ansible-{{VERSION}}.tar.gz``. VERSION must be the full version number, for example 3.1.0 or 4.0.0b2. You can make VERSION a variable in your package managing system that you update in one place whenever you package a new version.

.. note::

    If you are creating your own Ansible package, you must also download or package ``ansible-core`` (or ``ansible-base`` for packages based on 2.10.x) from PyPI as part of your Ansible package. You must specify a particular version. Visit the PyPI project pages to download files for `ansible-core <https://pypi.org/project/ansible-core/>`_ or `ansible-base <https://pypi.org/project/ansible-base/>`_.

These releases are also tagged in the `git repository <https://github.com/ansible/ansible/releases>`_ with the release version.

.. _shell_completion:

Adding Ansible command shell completion
=======================================

As of Ansible 2.9, you can add shell completion for the Ansible command line utilities by installing an optional dependency called ``argcomplete``. ``argcomplete`` supports bash, and has limited support for zsh and tcsh.
You can install ``python-argcomplete`` from EPEL on Red Hat Enterprise based distributions, or from the standard OS repositories for many other distributions.

For more information about installation and configuration, see the `argcomplete documentation <https://kislyuk.github.io/argcomplete/>`_.

Installing ``argcomplete`` on RHEL, CentOS, or Fedora
-----------------------------------------------------

On Fedora:

.. code-block:: bash

    $ sudo dnf install python-argcomplete

On RHEL and CentOS:

.. code-block:: bash

    $ sudo yum install epel-release
    $ sudo yum install python-argcomplete

Installing ``argcomplete`` with ``apt``
---------------------------------------

.. code-block:: bash

    $ sudo apt install python3-argcomplete

Installing ``argcomplete`` with ``pip``
---------------------------------------

.. code-block:: bash

    $ python -m pip install argcomplete

Configuring ``argcomplete``
---------------------------

There are two ways to configure ``argcomplete`` to allow shell completion of the Ansible command line utilities: globally or per command.

Global configuration
^^^^^^^^^^^^^^^^^^^^

Global completion requires bash 4.2.

.. code-block:: bash

    $ sudo activate-global-python-argcomplete

This will write a bash completion file to a global location. Use ``--dest`` to change the location.

Per command configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

If you do not have bash 4.2, you must register each script independently.

.. code-block:: bash

    $ eval $(register-python-argcomplete ansible)
    $ eval $(register-python-argcomplete ansible-config)
    $ eval $(register-python-argcomplete ansible-console)
    $ eval $(register-python-argcomplete ansible-doc)
    $ eval $(register-python-argcomplete ansible-galaxy)
    $ eval $(register-python-argcomplete ansible-inventory)
    $ eval $(register-python-argcomplete ansible-playbook)
    $ eval $(register-python-argcomplete ansible-pull)
    $ eval $(register-python-argcomplete ansible-vault)

You should place the above commands into your shell's profile file, such as ``~/.profile`` or ``~/.bash_profile``.

Using ``argcomplete`` with zsh or tcsh
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See the `argcomplete documentation <https://kislyuk.github.io/argcomplete/>`_.

.. seealso::

   :ref:`intro_adhoc`
       Examples of basic commands
   :ref:`working_with_playbooks`
       Learning Ansible's configuration management language
   :ref:`installation_faqs`
       FAQs related to installing Ansible
   `Mailing List <https://groups.google.com/group/ansible-project>`_
       Questions? Help? Ideas? Stop by the list on Google Groups
   :ref:`communication_irc`
       How to join Ansible chat channels

.. [1] ``paramiko`` was included in Ansible's ``requirements.txt`` prior to 2.8.
closed
ansible/ansible
https://github.com/ansible/ansible
77,025
hostname module crashes with TypeError when FileStrategy is used
### Summary The `hostname` module crashes with a `TypeError` exception when the file-based strategy is used (e.g. when handling a Devuan host). ### Issue Type Bug Report ### Component Name hostname ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/tseeker/ansible/main/ansible.cfg configured module search path = ['/home/tseeker/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/tseeker/.virtualenvs/ansible/lib/python3.9/site-packages/ansible ansible collection location = /home/tseeker/.ansible/collections:/usr/share/ansible/collections executable location = /home/tseeker/.virtualenvs/ansible/bin/ansible python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_ASK_VAULT_PASS(/home/tseeker/ansible/main/ansible.cfg) = True DEFAULT_HOST_LIST(/home/tseeker/ansible/main/ansible.cfg) = ['/home/tseeker/ansible/main/inventory'] DEFAULT_JINJA2_EXTENSIONS(/home/tseeker/ansible/main/ansible.cfg) = jinja2.ext.do HOST_KEY_CHECKING(/home/tseeker/ansible/main/ansible.cfg) = False USE_PERSISTENT_CONNECTIONS(/home/tseeker/ansible/main/ansible.cfg) = True ``` ### OS / Environment Devuan Chimaera on both the host running Ansible and the target. ### Steps to Reproduce Tested with : ```bash ansible -m ansible.builtin.hostname -a 'name=target' target ``` Various values for `use` were attempted. ### Expected Results I expected the `hostname` to do exactly nothing as it was already configured on the machine in question. ### Actual Results ```console The plugin crashes with the following traceback : Traceback (most recent call last): File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 259, in <module> _ansiballz_main() File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 246, in _ansiballz_main exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS) File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 213, in debug runpy.run_module(mod_name='ansible.modules.hostname', init_globals=None, run_name='__main__', alter_sys=True) File "/usr/lib/python3.9/runpy.py", line 210, in run_module return _run_module_code(code, init_globals, run_name, mod_spec) File "/usr/lib/python3.9/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "/usr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/debug_dir/ansible/modules/hostname.py", line 891, in <module> main() File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/debug_dir/ansible/modules/hostname.py", line 885, in main 'before': 'hostname = ' + name_before + '\n'} TypeError: can only concatenate str (not "list") to str ``` This is caused by line 260 of `hostname.py`, which returns the array containing all the lines in whatever file `FileStrategy` is currently reading rather than just one line. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77025
https://github.com/ansible/ansible/pull/77074
6a7009a84f550c5a9573c1aa1337c0f3960a2415
d60efd97687803fd184ac53aa691bd4e0ec43170
2022-02-15T15:00:59Z
python
2022-02-24T19:05:52Z
changelogs/fragments/77074-hostname-fix-typeerror-in-filestrategy.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,025
hostname module crashes with TypeError when FileStrategy is used
### Summary The `hostname` module crashes with a `TypeError` exception when the file-based strategy is used (e.g. when handling a Devuan host). ### Issue Type Bug Report ### Component Name hostname ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/tseeker/ansible/main/ansible.cfg configured module search path = ['/home/tseeker/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/tseeker/.virtualenvs/ansible/lib/python3.9/site-packages/ansible ansible collection location = /home/tseeker/.ansible/collections:/usr/share/ansible/collections executable location = /home/tseeker/.virtualenvs/ansible/bin/ansible python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_ASK_VAULT_PASS(/home/tseeker/ansible/main/ansible.cfg) = True DEFAULT_HOST_LIST(/home/tseeker/ansible/main/ansible.cfg) = ['/home/tseeker/ansible/main/inventory'] DEFAULT_JINJA2_EXTENSIONS(/home/tseeker/ansible/main/ansible.cfg) = jinja2.ext.do HOST_KEY_CHECKING(/home/tseeker/ansible/main/ansible.cfg) = False USE_PERSISTENT_CONNECTIONS(/home/tseeker/ansible/main/ansible.cfg) = True ``` ### OS / Environment Devuan Chimaera on both the host running Ansible and the target. ### Steps to Reproduce Tested with : ```bash ansible -m ansible.builtin.hostname -a 'name=target' target ``` Various values for `use` were attempted. ### Expected Results I expected the `hostname` to do exactly nothing as it was already configured on the machine in question. ### Actual Results ```console The plugin crashes with the following traceback : Traceback (most recent call last): File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 259, in <module> _ansiballz_main() File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 246, in _ansiballz_main exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS) File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/AnsiballZ_hostname.py", line 213, in debug runpy.run_module(mod_name='ansible.modules.hostname', init_globals=None, run_name='__main__', alter_sys=True) File "/usr/lib/python3.9/runpy.py", line 210, in run_module return _run_module_code(code, init_globals, run_name, mod_spec) File "/usr/lib/python3.9/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "/usr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/debug_dir/ansible/modules/hostname.py", line 891, in <module> main() File "/home/administrator/.ansible/tmp/ansible-tmp-1644935764.4447377-23987-132310327618278/debug_dir/ansible/modules/hostname.py", line 885, in main 'before': 'hostname = ' + name_before + '\n'} TypeError: can only concatenate str (not "list") to str ``` This is caused by line 260 of `hostname.py`, which returns the array containing all the lines in whatever file `FileStrategy` is currently reading rather than just one line. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77025
https://github.com/ansible/ansible/pull/77074
6a7009a84f550c5a9573c1aa1337c0f3960a2415
d60efd97687803fd184ac53aa691bd4e0ec43170
2022-02-15T15:00:59Z
python
2022-02-24T19:05:52Z
lib/ansible/modules/hostname.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2013, Hiroaki Nakamura <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: hostname author: - Adrian Likins (@alikins) - Hideki Saito (@saito-hideki) version_added: "1.4" short_description: Manage hostname requirements: [ hostname ] description: - Set system's hostname. Supports most OSs/Distributions including those using C(systemd). - Windows, HP-UX, and AIX are not currently supported. notes: - This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template) or M(ansible.builtin.replace). - On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName) cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName). options: name: description: - Name of the host. - If the value is a fully qualified domain name that does not resolve from the given host, this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout. type: str required: true use: description: - Which strategy to use to update the hostname. - If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information. - Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'. choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd'] type: str version_added: '2.9' extends_documentation_fragment: - action_common_attributes - action_common_attributes.facts attributes: check_mode: support: full diff_mode: support: full facts: support: full platform: platforms: posix ''' EXAMPLES = ''' - name: Set a hostname ansible.builtin.hostname: name: web01 - name: Set a hostname specifying strategy ansible.builtin.hostname: name: web01 use: systemd ''' import os import platform import socket import traceback from ansible.module_utils.basic import ( AnsibleModule, get_distribution, get_distribution_version, ) from ansible.module_utils.common.sys_info import get_platform_subclass from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector from ansible.module_utils.facts.utils import get_file_lines, get_file_content from ansible.module_utils._text import to_native, to_text from ansible.module_utils.six import PY3, text_type STRATS = { 'alpine': 'Alpine', 'debian': 'Systemd', 'freebsd': 'FreeBSD', 'generic': 'Base', 'macos': 'Darwin', 'macosx': 'Darwin', 'darwin': 'Darwin', 'openbsd': 'OpenBSD', 'openrc': 'OpenRC', 'redhat': 'RedHat', 'sles': 'SLES', 'solaris': 'Solaris', 'systemd': 'Systemd', } class UnimplementedStrategy(object): def __init__(self, module): self.module = module def update_current_and_permanent_hostname(self): self.unimplemented_error() def update_current_hostname(self): self.unimplemented_error() def update_permanent_hostname(self): self.unimplemented_error() def get_current_hostname(self): self.unimplemented_error() def set_current_hostname(self, name): self.unimplemented_error() def get_permanent_hostname(self): self.unimplemented_error() def set_permanent_hostname(self, name): self.unimplemented_error() def unimplemented_error(self): system = platform.system() 
distribution = get_distribution() if distribution is not None: msg_platform = '%s (%s)' % (system, distribution) else: msg_platform = system self.module.fail_json( msg='hostname module cannot be used on platform %s' % msg_platform) class Hostname(object): """ This is a generic Hostname manipulation class that is subclassed based on platform. A subclass may wish to set different strategy instance to self.strategy. All subclasses MUST define platform and distribution (which may be None). """ platform = 'Generic' distribution = None strategy_class = UnimplementedStrategy def __new__(cls, *args, **kwargs): new_cls = get_platform_subclass(Hostname) return super(cls, new_cls).__new__(new_cls) def __init__(self, module): self.module = module self.name = module.params['name'] self.use = module.params['use'] if self.use is not None: strat = globals()['%sStrategy' % STRATS[self.use]] self.strategy = strat(module) elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module): # This is Linux and systemd is active self.strategy = SystemdStrategy(module) else: self.strategy = self.strategy_class(module) def update_current_and_permanent_hostname(self): return self.strategy.update_current_and_permanent_hostname() def get_current_hostname(self): return self.strategy.get_current_hostname() def set_current_hostname(self, name): self.strategy.set_current_hostname(name) def get_permanent_hostname(self): return self.strategy.get_permanent_hostname() def set_permanent_hostname(self, name): self.strategy.set_permanent_hostname(name) class BaseStrategy(object): def __init__(self, module): self.module = module self.changed = False def update_current_and_permanent_hostname(self): self.update_current_hostname() self.update_permanent_hostname() return self.changed def update_current_hostname(self): name = self.module.params['name'] current_name = self.get_current_hostname() if current_name != name: if not self.module.check_mode: self.set_current_hostname(name) self.changed = True def update_permanent_hostname(self): name = self.module.params['name'] permanent_name = self.get_permanent_hostname() if permanent_name != name: if not self.module.check_mode: self.set_permanent_hostname(name) self.changed = True def get_current_hostname(self): return self.get_permanent_hostname() def set_current_hostname(self, name): pass def get_permanent_hostname(self): raise NotImplementedError def set_permanent_hostname(self, name): raise NotImplementedError class CommandStrategy(BaseStrategy): COMMAND = 'hostname' def __init__(self, module): super(CommandStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostname_cmd] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): return 'UNKNOWN' def set_permanent_hostname(self, name): pass class FileStrategy(BaseStrategy): FILE = '/etc/hostname' def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: return get_file_lines(self.FILE) except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): 
try: with open(self.FILE, 'w+') as f: f.write("%s\n" % name) except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class SLESStrategy(FileStrategy): """ This is a SLES Hostname strategy class - it edits the /etc/HOSTNAME file. """ FILE = '/etc/HOSTNAME' class RedHatStrategy(BaseStrategy): """ This is a Redhat Hostname strategy class - it edits the /etc/sysconfig/network file. """ NETWORK_FILE = '/etc/sysconfig/network' def get_permanent_hostname(self): try: for line in get_file_lines(self.NETWORK_FILE): line = to_native(line).strip() if line.startswith('HOSTNAME'): k, v = line.split('=') return v.strip() self.module.fail_json( "Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE ) except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: lines = [] found = False content = get_file_content(self.NETWORK_FILE, strip=False) or "" for line in content.splitlines(True): line = to_native(line) if line.strip().startswith('HOSTNAME'): lines.append("HOSTNAME=%s\n" % name) found = True else: lines.append(line) if not found: lines.append("HOSTNAME=%s\n" % name) with open(self.NETWORK_FILE, 'w+') as f: f.writelines(lines) except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class AlpineStrategy(FileStrategy): """ This is a Alpine Linux Hostname manipulation strategy class - it edits the /etc/hostname file then run hostname -F /etc/hostname. """ FILE = '/etc/hostname' COMMAND = 'hostname' def set_current_hostname(self, name): super(AlpineStrategy, self).set_current_hostname(name) hostname_cmd = self.module.get_bin_path(self.COMMAND, True) cmd = [hostname_cmd, '-F', self.FILE] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class SystemdStrategy(BaseStrategy): """ This is a Systemd hostname manipulation strategy class - it uses the hostnamectl command. 
""" COMMAND = "hostnamectl" def __init__(self, module): super(SystemdStrategy, self).__init__(module) self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostnamectl_cmd, '--transient', 'status'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): if len(name) > 64: self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name") cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): cmd = [self.hostnamectl_cmd, '--static', 'status'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): if len(name) > 64: self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name") cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class OpenRCStrategy(BaseStrategy): """ This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits the /etc/conf.d/hostname file. """ FILE = '/etc/conf.d/hostname' def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: for line in get_file_lines(self.FILE): line = line.strip() if line.startswith('hostname='): return line[10:].strip('"') except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: lines = [x.strip() for x in get_file_lines(self.FILE)] for i, line in enumerate(lines): if line.startswith('hostname='): lines[i] = 'hostname="%s"' % name break with open(self.FILE, 'w') as f: f.write('\n'.join(lines) + '\n') except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class OpenBSDStrategy(FileStrategy): """ This is a OpenBSD family Hostname manipulation strategy class - it edits the /etc/myname file. """ FILE = '/etc/myname' class SolarisStrategy(BaseStrategy): """ This is a Solaris11 or later Hostname manipulation strategy class - it execute hostname command. 
""" COMMAND = "hostname" def __init__(self, module): super(SolarisStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def set_current_hostname(self, name): cmd_option = '-t' cmd = [self.hostname_cmd, cmd_option, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): fmri = 'svc:/system/identity:node' pattern = 'config/nodename' cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern) rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class FreeBSDStrategy(BaseStrategy): """ This is a FreeBSD hostname manipulation strategy class - it edits the /etc/rc.conf.d/hostname file. """ FILE = '/etc/rc.conf.d/hostname' COMMAND = "hostname" def __init__(self, module): super(FreeBSDStrategy, self).__init__(module) self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True) def get_current_hostname(self): cmd = [self.hostname_cmd] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_current_hostname(self, name): cmd = [self.hostname_cmd, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): if not os.path.isfile(self.FILE): return '' try: for line in get_file_lines(self.FILE): line = line.strip() if line.startswith('hostname='): return line[10:].strip('"') except Exception as e: self.module.fail_json( msg="failed to read hostname: %s" % to_native(e), exception=traceback.format_exc()) def set_permanent_hostname(self, name): try: if os.path.isfile(self.FILE): lines = [x.strip() for x in get_file_lines(self.FILE)] for i, line in enumerate(lines): if line.startswith('hostname='): lines[i] = 'hostname="%s"' % name break else: lines = ['hostname="%s"' % name] with open(self.FILE, 'w') as f: f.write('\n'.join(lines) + '\n') except Exception as e: self.module.fail_json( msg="failed to update hostname: %s" % to_native(e), exception=traceback.format_exc()) class DarwinStrategy(BaseStrategy): """ This is a macOS hostname manipulation strategy class. It uses /usr/sbin/scutil to set ComputerName, HostName, and LocalHostName. HostName corresponds to what most platforms consider to be hostname. It controls the name used on the command line and SSH. However, macOS also has LocalHostName and ComputerName settings. LocalHostName controls the Bonjour/ZeroConf name, used by services like AirDrop. This class implements a method, _scrub_hostname(), that mimics the transformations macOS makes on hostnames when enterened in the Sharing preference pane. It replaces spaces with dashes and removes all special characters. ComputerName is the name used for user-facing GUI services, like the System Preferences/Sharing pane and when users connect to the Mac over the network. 
""" def __init__(self, module): super(DarwinStrategy, self).__init__(module) self.scutil = self.module.get_bin_path('scutil', True) self.name_types = ('HostName', 'ComputerName', 'LocalHostName') self.scrubbed_name = self._scrub_hostname(self.module.params['name']) def _make_translation(self, replace_chars, replacement_chars, delete_chars): if PY3: return str.maketrans(replace_chars, replacement_chars, delete_chars) if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type): raise ValueError('replace_chars and replacement_chars must both be strings') if len(replace_chars) != len(replacement_chars): raise ValueError('replacement_chars must be the same length as replace_chars') table = dict(zip((ord(c) for c in replace_chars), replacement_chars)) for char in delete_chars: table[ord(char)] = None return table def _scrub_hostname(self, name): """ LocalHostName only accepts valid DNS characters while HostName and ComputerName accept a much wider range of characters. This function aims to mimic how macOS translates a friendly name to the LocalHostName. """ # Replace all these characters with a single dash name = to_text(name) replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ ' delete_chars = u".'" table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars) name = name.translate(table) # Replace multiple dashes with a single dash while '-' * 2 in name: name = name.replace('-' * 2, '') name = name.rstrip('-') return name def get_current_hostname(self): cmd = [self.scutil, '--get', 'HostName'] rc, out, err = self.module.run_command(cmd) if rc != 0 and 'HostName: not set' not in err: self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def get_permanent_hostname(self): cmd = [self.scutil, '--get', 'ComputerName'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err)) return to_native(out).strip() def set_permanent_hostname(self, name): for hostname_type in self.name_types: cmd = [self.scutil, '--set', hostname_type] if hostname_type == 'LocalHostName': cmd.append(to_native(self.scrubbed_name)) else: cmd.append(to_native(name)) rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type)) def set_current_hostname(self, name): pass def update_current_hostname(self): pass def update_permanent_hostname(self): name = self.module.params['name'] # Get all the current host name values in the order of self.name_types all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types) # Get the expected host name values based on the order in self.name_types expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types) # Ensure all three names are updated if all_names != expected_names: if not self.module.check_mode: self.set_permanent_hostname(name) self.changed = True class SLESHostname(Hostname): platform = 'Linux' distribution = 'Sles' try: distribution_version = get_distribution_version() # cast to float may raise ValueError on non SLES, we use float for a little more safety over int if distribution_version and 10 <= float(distribution_version) <= 12: strategy_class = SLESStrategy else: raise ValueError() except ValueError: strategy_class = 
UnimplementedStrategy class RHELHostname(Hostname): platform = 'Linux' distribution = 'Redhat' strategy_class = RedHatStrategy class CentOSHostname(Hostname): platform = 'Linux' distribution = 'Centos' strategy_class = RedHatStrategy class AnolisOSHostname(Hostname): platform = 'Linux' distribution = 'Anolis' strategy_class = RedHatStrategy class CloudlinuxserverHostname(Hostname): platform = 'Linux' distribution = 'Cloudlinuxserver' strategy_class = RedHatStrategy class CloudlinuxHostname(Hostname): platform = 'Linux' distribution = 'Cloudlinux' strategy_class = RedHatStrategy class AlinuxHostname(Hostname): platform = 'Linux' distribution = 'Alinux' strategy_class = RedHatStrategy class ScientificHostname(Hostname): platform = 'Linux' distribution = 'Scientific' strategy_class = RedHatStrategy class OracleLinuxHostname(Hostname): platform = 'Linux' distribution = 'Oracle' strategy_class = RedHatStrategy class VirtuozzoLinuxHostname(Hostname): platform = 'Linux' distribution = 'Virtuozzo' strategy_class = RedHatStrategy class AmazonLinuxHostname(Hostname): platform = 'Linux' distribution = 'Amazon' strategy_class = RedHatStrategy class DebianHostname(Hostname): platform = 'Linux' distribution = 'Debian' strategy_class = FileStrategy class KylinHostname(Hostname): platform = 'Linux' distribution = 'Kylin' strategy_class = FileStrategy class CumulusHostname(Hostname): platform = 'Linux' distribution = 'Cumulus-linux' strategy_class = FileStrategy class KaliHostname(Hostname): platform = 'Linux' distribution = 'Kali' strategy_class = FileStrategy class ParrotHostname(Hostname): platform = 'Linux' distribution = 'Parrot' strategy_class = FileStrategy class UbuntuHostname(Hostname): platform = 'Linux' distribution = 'Ubuntu' strategy_class = FileStrategy class LinuxmintHostname(Hostname): platform = 'Linux' distribution = 'Linuxmint' strategy_class = FileStrategy class LinaroHostname(Hostname): platform = 'Linux' distribution = 'Linaro' strategy_class = FileStrategy class DevuanHostname(Hostname): platform = 'Linux' distribution = 'Devuan' strategy_class = FileStrategy class RaspbianHostname(Hostname): platform = 'Linux' distribution = 'Raspbian' strategy_class = FileStrategy class GentooHostname(Hostname): platform = 'Linux' distribution = 'Gentoo' strategy_class = OpenRCStrategy class ALTLinuxHostname(Hostname): platform = 'Linux' distribution = 'Altlinux' strategy_class = RedHatStrategy class AlpineLinuxHostname(Hostname): platform = 'Linux' distribution = 'Alpine' strategy_class = AlpineStrategy class OpenBSDHostname(Hostname): platform = 'OpenBSD' distribution = None strategy_class = OpenBSDStrategy class SolarisHostname(Hostname): platform = 'SunOS' distribution = None strategy_class = SolarisStrategy class FreeBSDHostname(Hostname): platform = 'FreeBSD' distribution = None strategy_class = FreeBSDStrategy class NetBSDHostname(Hostname): platform = 'NetBSD' distribution = None strategy_class = FreeBSDStrategy class NeonHostname(Hostname): platform = 'Linux' distribution = 'Neon' strategy_class = FileStrategy class DarwinHostname(Hostname): platform = 'Darwin' distribution = None strategy_class = DarwinStrategy class VoidLinuxHostname(Hostname): platform = 'Linux' distribution = 'Void' strategy_class = FileStrategy class PopHostname(Hostname): platform = 'Linux' distribution = 'Pop' strategy_class = FileStrategy class EurolinuxHostname(Hostname): platform = 'Linux' distribution = 'Eurolinux' strategy_class = RedHatStrategy def main(): module = AnsibleModule( argument_spec=dict( 
name=dict(type='str', required=True), use=dict(type='str', choices=STRATS.keys()) ), supports_check_mode=True, ) hostname = Hostname(module) name = module.params['name'] current_hostname = hostname.get_current_hostname() permanent_hostname = hostname.get_permanent_hostname() changed = hostname.update_current_and_permanent_hostname() if name != current_hostname: name_before = current_hostname elif name != permanent_hostname: name_before = permanent_hostname else: name_before = permanent_hostname # NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be # slow to return if the name does not resolve correctly. kw = dict(changed=changed, name=name, ansible_facts=dict(ansible_hostname=name.split('.')[0], ansible_nodename=name, ansible_fqdn=socket.getfqdn(), ansible_domain='.'.join(socket.getfqdn().split('.')[1:]))) if changed: kw['diff'] = {'after': 'hostname = ' + name + '\n', 'before': 'hostname = ' + name_before + '\n'} module.exit_json(**kw) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
77,108
Empty strings are accepted when not listed in 'choices'.
### Summary When creating a new plugin, if an argument uses a list of pre-defined `choices`, even if an empty string ('') is not defined in the selection, it is accepted as a valid value in the playbook. The problem is that `diff_list` is evaluated to "", and thus, the test that should raise an error evaluates to false (`lib/ansible/module_utils/common/parametres.py`): https://github.com/ansible/ansible/blob/36121aeee7812e7f37dd49a64c0dbf9cf741878f/lib/ansible/module_utils/common/parameters.py#L653..L660 ### Issue Type Bug Report ### Component Name Any plugin that uses `choices` in one of its parameters. ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/rjeffman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.10/site-packages/ansible ansible collection location = /home/rjeffman/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.10.2 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 3.0.1 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed $ ``` ### OS / Environment Any. ### Steps to Reproduce If you have a plugin (e.g: `myplugin`) where `argument_spec` list a parameter defined like: ``` ch_param=dict(type="list", choices=["A", "B", "C"]), ``` Then the following task would not raise an error: ```yaml (paste below) - myplugin: ch_param: "" ``` ### Expected Results If a value provide by a parameter with a list of valid choices, it is expected that any value outside of this list would raise an error. ### Actual Results ```console No error is reported by Ansible. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77108
https://github.com/ansible/ansible/pull/77119
d60efd97687803fd184ac53aa691bd4e0ec43170
4f48f375a0203b0d09c55522a86300a52da5b24a
2022-02-22T19:09:47Z
python
2022-02-24T19:08:33Z
changelogs/fragments/77108_params_blank.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,108
Empty strings are accepted when not listed in 'choices'.
### Summary When creating a new plugin, if an argument uses a list of pre-defined `choices`, even if an empty string ('') is not defined in the selection, it is accepted as a valid value in the playbook. The problem is that `diff_list` is evaluated to "", and thus, the test that should raise an error evaluates to false (`lib/ansible/module_utils/common/parametres.py`): https://github.com/ansible/ansible/blob/36121aeee7812e7f37dd49a64c0dbf9cf741878f/lib/ansible/module_utils/common/parameters.py#L653..L660 ### Issue Type Bug Report ### Component Name Any plugin that uses `choices` in one of its parameters. ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/rjeffman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.10/site-packages/ansible ansible collection location = /home/rjeffman/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.10.2 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 3.0.1 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed $ ``` ### OS / Environment Any. ### Steps to Reproduce If you have a plugin (e.g: `myplugin`) where `argument_spec` list a parameter defined like: ``` ch_param=dict(type="list", choices=["A", "B", "C"]), ``` Then the following task would not raise an error: ```yaml (paste below) - myplugin: ch_param: "" ``` ### Expected Results If a value provide by a parameter with a list of valid choices, it is expected that any value outside of this list would raise an error. ### Actual Results ```console No error is reported by Ansible. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77108
https://github.com/ansible/ansible/pull/77119
d60efd97687803fd184ac53aa691bd4e0ec43170
4f48f375a0203b0d09c55522a86300a52da5b24a
2022-02-22T19:09:47Z
python
2022-02-24T19:08:33Z
lib/ansible/module_utils/common/parameters.py
# -*- coding: utf-8 -*- # Copyright (c) 2019 Ansible Project # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type import datetime import os from collections import deque from itertools import chain from ansible.module_utils.common.collections import is_iterable from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text from ansible.module_utils.common.text.formatters import lenient_lowercase from ansible.module_utils.common.warnings import warn from ansible.module_utils.errors import ( AliasError, AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, ArgumentTypeError, ArgumentValueError, ElementError, MutuallyExclusiveError, NoLogError, RequiredByError, RequiredError, RequiredIfError, RequiredOneOfError, RequiredTogetherError, SubParameterTypeError, ) from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE from ansible.module_utils.common._collections_compat import ( KeysView, Set, Sequence, Mapping, MutableMapping, MutableSet, MutableSequence, ) from ansible.module_utils.six import ( binary_type, integer_types, string_types, text_type, PY2, PY3, ) from ansible.module_utils.common.validation import ( check_mutually_exclusive, check_required_arguments, check_required_together, check_required_one_of, check_required_if, check_required_by, check_type_bits, check_type_bool, check_type_bytes, check_type_dict, check_type_float, check_type_int, check_type_jsonarg, check_type_list, check_type_path, check_type_raw, check_type_str, ) # Python2 & 3 way to get NoneType NoneType = type(None) _ADDITIONAL_CHECKS = ( {'func': check_required_together, 'attr': 'required_together', 'err': RequiredTogetherError}, {'func': check_required_one_of, 'attr': 'required_one_of', 'err': RequiredOneOfError}, {'func': check_required_if, 'attr': 'required_if', 'err': RequiredIfError}, {'func': check_required_by, 'attr': 'required_by', 'err': RequiredByError}, ) # if adding boolean attribute, also add to PASS_BOOL # some of this dupes defaults from controller config PASS_VARS = { 'check_mode': ('check_mode', False), 'debug': ('_debug', False), 'diff': ('_diff', False), 'keep_remote_files': ('_keep_remote_files', False), 'module_name': ('_name', None), 'no_log': ('no_log', False), 'remote_tmp': ('_remote_tmp', None), 'selinux_special_fs': ('_selinux_special_fs', ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat']), 'shell_executable': ('_shell', '/bin/sh'), 'socket': ('_socket_path', None), 'string_conversion_action': ('_string_conversion_action', 'warn'), 'syslog_facility': ('_syslog_facility', 'INFO'), 'tmpdir': ('_tmpdir', None), 'verbosity': ('_verbosity', 0), 'version': ('ansible_version', '0.0'), } PASS_BOOLS = ('check_mode', 'debug', 'diff', 'keep_remote_files', 'no_log') DEFAULT_TYPE_VALIDATORS = { 'str': check_type_str, 'list': check_type_list, 'dict': check_type_dict, 'bool': check_type_bool, 'int': check_type_int, 'float': check_type_float, 'path': check_type_path, 'raw': check_type_raw, 'jsonarg': check_type_jsonarg, 'json': check_type_jsonarg, 'bytes': check_type_bytes, 'bits': check_type_bits, } def _get_type_validator(wanted): """Returns the callable used to validate a wanted type and the type name. :arg wanted: String or callable. If a string, get the corresponding validation function from DEFAULT_TYPE_VALIDATORS. If callable, get the name of the custom callable and return that for the type_checker. 
:returns: Tuple of callable function or None, and a string that is the name of the wanted type. """ # Use one of our builtin validators. if not callable(wanted): if wanted is None: # Default type for parameters wanted = 'str' type_checker = DEFAULT_TYPE_VALIDATORS.get(wanted) # Use the custom callable for validation. else: type_checker = wanted wanted = getattr(wanted, '__name__', to_native(type(wanted))) return type_checker, wanted def _get_legal_inputs(argument_spec, parameters, aliases=None): if aliases is None: aliases = _handle_aliases(argument_spec, parameters) return list(aliases.keys()) + list(argument_spec.keys()) def _get_unsupported_parameters(argument_spec, parameters, legal_inputs=None, options_context=None): """Check keys in parameters against those provided in legal_inputs to ensure they contain legal values. If legal_inputs are not supplied, they will be generated using the argument_spec. :arg argument_spec: Dictionary of parameters, their type, and valid values. :arg parameters: Dictionary of parameters. :arg legal_inputs: List of valid key names property names. Overrides values in argument_spec. :arg options_context: List of parent keys for tracking the context of where a parameter is defined. :returns: Set of unsupported parameters. Empty set if no unsupported parameters are found. """ if legal_inputs is None: legal_inputs = _get_legal_inputs(argument_spec, parameters) unsupported_parameters = set() for k in parameters.keys(): if k not in legal_inputs: context = k if options_context: context = tuple(options_context + [k]) unsupported_parameters.add(context) return unsupported_parameters def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprecations=None): """Process aliases from an argument_spec including warnings and deprecations. Modify ``parameters`` by adding a new key for each alias with the supplied value from ``parameters``. If a list is provided to the alias_warnings parameter, it will be filled with tuples (option, alias) in every case where both an option and its alias are specified. If a list is provided to alias_deprecations, it will be populated with dictionaries, each containing deprecation information for each alias found in argument_spec. :param argument_spec: Dictionary of parameters, their type, and valid values. :type argument_spec: dict :param parameters: Dictionary of parameters. 
:type parameters: dict :param alias_warnings: :type alias_warnings: list :param alias_deprecations: :type alias_deprecations: list """ aliases_results = {} # alias:canon for (k, v) in argument_spec.items(): aliases = v.get('aliases', None) default = v.get('default', None) required = v.get('required', False) if alias_deprecations is not None: for alias in argument_spec[k].get('deprecated_aliases', []): if alias.get('name') in parameters: alias_deprecations.append(alias) if default is not None and required: # not alias specific but this is a good place to check this raise ValueError("internal error: required and default are mutually exclusive for %s" % k) if aliases is None: continue if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)): raise TypeError('internal error: aliases must be a list or tuple') for alias in aliases: aliases_results[alias] = k if alias in parameters: if k in parameters and alias_warnings is not None: alias_warnings.append((k, alias)) parameters[k] = parameters[alias] return aliases_results def _list_deprecations(argument_spec, parameters, prefix=''): """Return a list of deprecations :arg argument_spec: An argument spec dictionary :arg parameters: Dictionary of parameters :returns: List of dictionaries containing a message and version in which the deprecated parameter will be removed, or an empty list. :Example return: .. code-block:: python [ { 'msg': "Param 'deptest' is deprecated. See the module docs for more information", 'version': '2.9' } ] """ deprecations = [] for arg_name, arg_opts in argument_spec.items(): if arg_name in parameters: if prefix: sub_prefix = '%s["%s"]' % (prefix, arg_name) else: sub_prefix = arg_name if arg_opts.get('removed_at_date') is not None: deprecations.append({ 'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix, 'date': arg_opts.get('removed_at_date'), 'collection_name': arg_opts.get('removed_from_collection'), }) elif arg_opts.get('removed_in_version') is not None: deprecations.append({ 'msg': "Param '%s' is deprecated. 
See the module docs for more information" % sub_prefix, 'version': arg_opts.get('removed_in_version'), 'collection_name': arg_opts.get('removed_from_collection'), }) # Check sub-argument spec sub_argument_spec = arg_opts.get('options') if sub_argument_spec is not None: sub_arguments = parameters[arg_name] if isinstance(sub_arguments, Mapping): sub_arguments = [sub_arguments] if isinstance(sub_arguments, list): for sub_params in sub_arguments: if isinstance(sub_params, Mapping): deprecations.extend(_list_deprecations(sub_argument_spec, sub_params, prefix=sub_prefix)) return deprecations def _list_no_log_values(argument_spec, params): """Return set of no log values :arg argument_spec: An argument spec dictionary :arg params: Dictionary of all parameters :returns: :class:`set` of strings that should be hidden from output: """ no_log_values = set() for arg_name, arg_opts in argument_spec.items(): if arg_opts.get('no_log', False): # Find the value for the no_log'd param no_log_object = params.get(arg_name, None) if no_log_object: try: no_log_values.update(_return_datastructure_name(no_log_object)) except TypeError as e: raise TypeError('Failed to convert "%s": %s' % (arg_name, to_native(e))) # Get no_log values from suboptions sub_argument_spec = arg_opts.get('options') if sub_argument_spec is not None: wanted_type = arg_opts.get('type') sub_parameters = params.get(arg_name) if sub_parameters is not None: if wanted_type == 'dict' or (wanted_type == 'list' and arg_opts.get('elements', '') == 'dict'): # Sub parameters can be a dict or list of dicts. Ensure parameters are always a list. if not isinstance(sub_parameters, list): sub_parameters = [sub_parameters] for sub_param in sub_parameters: # Validate dict fields in case they came in as strings if isinstance(sub_param, string_types): sub_param = check_type_dict(sub_param) if not isinstance(sub_param, Mapping): raise TypeError("Value '{1}' in the sub parameter field '{0}' must by a {2}, " "not '{1.__class__.__name__}'".format(arg_name, sub_param, wanted_type)) no_log_values.update(_list_no_log_values(sub_argument_spec, sub_param)) return no_log_values def _return_datastructure_name(obj): """ Return native stringified values from datastructures. For use with removing sensitive values pre-jsonification.""" if isinstance(obj, (text_type, binary_type)): if obj: yield to_native(obj, errors='surrogate_or_strict') return elif isinstance(obj, Mapping): for element in obj.items(): for subelement in _return_datastructure_name(element[1]): yield subelement elif is_iterable(obj): for element in obj: for subelement in _return_datastructure_name(element): yield subelement elif isinstance(obj, (bool, NoneType)): # This must come before int because bools are also ints return elif isinstance(obj, tuple(list(integer_types) + [float])): yield to_native(obj, nonstring='simplerepr') else: raise TypeError('Unknown parameter type: %s' % (type(obj))) def _remove_values_conditions(value, no_log_strings, deferred_removals): """ Helper function for :meth:`remove_values`. :arg value: The value to check for strings that need to be stripped :arg no_log_strings: set of strings which must be stripped out of any values :arg deferred_removals: List which holds information about nested containers that have to be iterated for removals. It is passed into this function so that more entries can be added to it if value is a container type. 
The format of each entry is a 2-tuple where the first element is the ``value`` parameter and the second value is a new container to copy the elements of ``value`` into once iterated. :returns: if ``value`` is a scalar, returns ``value`` with two exceptions: 1. :class:`~datetime.datetime` objects which are changed into a string representation. 2. objects which are in ``no_log_strings`` are replaced with a placeholder so that no sensitive data is leaked. If ``value`` is a container type, returns a new empty container. ``deferred_removals`` is added to as a side-effect of this function. .. warning:: It is up to the caller to make sure the order in which value is passed in is correct. For instance, higher level containers need to be passed in before lower level containers. For example, given ``{'level1': {'level2': 'level3': [True]} }`` first pass in the dictionary for ``level1``, then the dict for ``level2``, and finally the list for ``level3``. """ if isinstance(value, (text_type, binary_type)): # Need native str type native_str_value = value if isinstance(value, text_type): value_is_text = True if PY2: native_str_value = to_bytes(value, errors='surrogate_or_strict') elif isinstance(value, binary_type): value_is_text = False if PY3: native_str_value = to_text(value, errors='surrogate_or_strict') if native_str_value in no_log_strings: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' for omit_me in no_log_strings: native_str_value = native_str_value.replace(omit_me, '*' * 8) if value_is_text and isinstance(native_str_value, binary_type): value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace') elif not value_is_text and isinstance(native_str_value, text_type): value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace') else: value = native_str_value elif isinstance(value, Sequence): if isinstance(value, MutableSequence): new_value = type(value)() else: new_value = [] # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, Set): if isinstance(value, MutableSet): new_value = type(value)() else: new_value = set() # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, Mapping): if isinstance(value, MutableMapping): new_value = type(value)() else: new_value = {} # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))): stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict') if stringy_value in no_log_strings: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' for omit_me in no_log_strings: if omit_me in stringy_value: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' elif isinstance(value, (datetime.datetime, datetime.date)): value = value.isoformat() else: raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) return value def _set_defaults(argument_spec, parameters, set_default=True): """Set default values for parameters when no value is supplied. Modifies parameters directly. :arg argument_spec: Argument spec :type argument_spec: dict :arg parameters: Parameters to evaluate :type parameters: dict :kwarg set_default: Whether or not to set the default values :type set_default: bool :returns: Set of strings that should not be logged. 
:rtype: set """ no_log_values = set() for param, value in argument_spec.items(): # TODO: Change the default value from None to Sentinel to differentiate between # user supplied None and a default value set by this function. default = value.get('default', None) # This prevents setting defaults on required items on the 1st run, # otherwise will set things without a default to None on the 2nd. if param not in parameters and (default is not None or set_default): # Make sure any default value for no_log fields are masked. if value.get('no_log', False) and default: no_log_values.add(default) parameters[param] = default return no_log_values def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals): """ Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """ if isinstance(value, (text_type, binary_type)): return value if isinstance(value, Sequence): if isinstance(value, MutableSequence): new_value = type(value)() else: new_value = [] # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, Set): if isinstance(value, MutableSet): new_value = type(value)() else: new_value = set() # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, Mapping): if isinstance(value, MutableMapping): new_value = type(value)() else: new_value = {} # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))): return value if isinstance(value, (datetime.datetime, datetime.date)): return value raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) def _validate_elements(wanted_type, parameter, values, options_context=None, errors=None): if errors is None: errors = AnsibleValidationErrorMultiple() type_checker, wanted_element_type = _get_type_validator(wanted_type) validated_parameters = [] # Get param name for strings so we can later display this value in a useful error message if needed # Only pass 'kwargs' to our checkers and ignore custom callable checkers kwargs = {} if wanted_element_type == 'str' and isinstance(wanted_type, string_types): if isinstance(parameter, string_types): kwargs['param'] = parameter elif isinstance(parameter, dict): kwargs['param'] = list(parameter.keys())[0] for value in values: try: validated_parameters.append(type_checker(value, **kwargs)) except (TypeError, ValueError) as e: msg = "Elements value for option '%s'" % parameter if options_context: msg += " found in '%s'" % " -> ".join(options_context) msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_element_type, to_native(e)) errors.append(ElementError(msg)) return validated_parameters def _validate_argument_types(argument_spec, parameters, prefix='', options_context=None, errors=None): """Validate that parameter types match the type in the argument spec. Determine the appropriate type checker function and run each parameter value through that function. All error messages from type checker functions are returned. If any parameter fails to validate, it will not be in the returned parameters. :arg argument_spec: Argument spec :type argument_spec: dict :arg parameters: Parameters :type parameters: dict :kwarg prefix: Name of the parent key that contains the spec. Used in the error message :type prefix: str :kwarg options_context: List of contexts? 
:type options_context: list :returns: Two item tuple containing validated and coerced parameters and a list of any errors that were encountered. :rtype: tuple """ if errors is None: errors = AnsibleValidationErrorMultiple() for param, spec in argument_spec.items(): if param not in parameters: continue value = parameters[param] if value is None: continue wanted_type = spec.get('type') type_checker, wanted_name = _get_type_validator(wanted_type) # Get param name for strings so we can later display this value in a useful error message if needed # Only pass 'kwargs' to our checkers and ignore custom callable checkers kwargs = {} if wanted_name == 'str' and isinstance(wanted_type, string_types): kwargs['param'] = list(parameters.keys())[0] # Get the name of the parent key if this is a nested option if prefix: kwargs['prefix'] = prefix try: parameters[param] = type_checker(value, **kwargs) elements_wanted_type = spec.get('elements', None) if elements_wanted_type: elements = parameters[param] if wanted_type != 'list' or not isinstance(elements, list): msg = "Invalid type %s for option '%s'" % (wanted_name, elements) if options_context: msg += " found in '%s'." % " -> ".join(options_context) msg += ", elements value check is supported only with 'list' type" errors.append(ArgumentTypeError(msg)) parameters[param] = _validate_elements(elements_wanted_type, param, elements, options_context, errors) except (TypeError, ValueError) as e: msg = "argument '%s' is of type %s" % (param, type(value)) if options_context: msg += " found in '%s'." % " -> ".join(options_context) msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e)) errors.append(ArgumentTypeError(msg)) def _validate_argument_values(argument_spec, parameters, options_context=None, errors=None): """Ensure all arguments have the requested values, and there are no stray arguments""" if errors is None: errors = AnsibleValidationErrorMultiple() for param, spec in argument_spec.items(): choices = spec.get('choices') if choices is None: continue if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)): if param in parameters: # Allow one or more when type='list' param with choices if isinstance(parameters[param], list): diff_list = ", ".join([item for item in parameters[param] if item not in choices]) if diff_list: choices_str = ", ".join([to_native(c) for c in choices]) msg = "value of %s must be one or more of: %s. Got no match for: %s" % (param, choices_str, diff_list) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentValueError(msg)) elif parameters[param] not in choices: # PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking # the value. If we can't figure this out, module author is responsible. 
if parameters[param] == 'False': overlap = BOOLEANS_FALSE.intersection(choices) if len(overlap) == 1: # Extract from a set (parameters[param],) = overlap if parameters[param] == 'True': overlap = BOOLEANS_TRUE.intersection(choices) if len(overlap) == 1: (parameters[param],) = overlap if parameters[param] not in choices: choices_str = ", ".join([to_native(c) for c in choices]) msg = "value of %s must be one of: %s, got: %s" % (param, choices_str, parameters[param]) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentValueError(msg)) else: msg = "internal error: choices for argument %s are not iterable: %s" % (param, choices) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentTypeError(msg)) def _validate_sub_spec(argument_spec, parameters, prefix='', options_context=None, errors=None, no_log_values=None, unsupported_parameters=None): """Validate sub argument spec. This function is recursive. """ if options_context is None: options_context = [] if errors is None: errors = AnsibleValidationErrorMultiple() if no_log_values is None: no_log_values = set() if unsupported_parameters is None: unsupported_parameters = set() for param, value in argument_spec.items(): wanted = value.get('type') if wanted == 'dict' or (wanted == 'list' and value.get('elements', '') == 'dict'): sub_spec = value.get('options') if value.get('apply_defaults', False): if sub_spec is not None: if parameters.get(param) is None: parameters[param] = {} else: continue elif sub_spec is None or param not in parameters or parameters[param] is None: continue # Keep track of context for warning messages options_context.append(param) # Make sure we can iterate over the elements if not isinstance(parameters[param], Sequence) or isinstance(parameters[param], string_types): elements = [parameters[param]] else: elements = parameters[param] for idx, sub_parameters in enumerate(elements): no_log_values.update(set_fallbacks(sub_spec, sub_parameters)) if not isinstance(sub_parameters, dict): errors.append(SubParameterTypeError("value of '%s' must be of type dict or list of dicts" % param)) continue # Set prefix for warning messages new_prefix = prefix + param if wanted == 'list': new_prefix += '[%d]' % idx new_prefix += '.' alias_warnings = [] alias_deprecations = [] try: options_aliases = _handle_aliases(sub_spec, sub_parameters, alias_warnings, alias_deprecations) except (TypeError, ValueError) as e: options_aliases = {} errors.append(AliasError(to_native(e))) for option, alias in alias_warnings: warn('Both option %s and its alias %s are set.' 
% (option, alias)) try: no_log_values.update(_list_no_log_values(sub_spec, sub_parameters)) except TypeError as te: errors.append(NoLogError(to_native(te))) legal_inputs = _get_legal_inputs(sub_spec, sub_parameters, options_aliases) unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context)) try: check_mutually_exclusive(value.get('mutually_exclusive'), sub_parameters, options_context) except TypeError as e: errors.append(MutuallyExclusiveError(to_native(e))) no_log_values.update(_set_defaults(sub_spec, sub_parameters, False)) try: check_required_arguments(sub_spec, sub_parameters, options_context) except TypeError as e: errors.append(RequiredError(to_native(e))) _validate_argument_types(sub_spec, sub_parameters, new_prefix, options_context, errors=errors) _validate_argument_values(sub_spec, sub_parameters, options_context, errors=errors) for check in _ADDITIONAL_CHECKS: try: check['func'](value.get(check['attr']), sub_parameters, options_context) except TypeError as e: errors.append(check['err'](to_native(e))) no_log_values.update(_set_defaults(sub_spec, sub_parameters)) # Handle nested specs _validate_sub_spec(sub_spec, sub_parameters, new_prefix, options_context, errors, no_log_values, unsupported_parameters) options_context.pop() def env_fallback(*args, **kwargs): """Load value from environment variable""" for arg in args: if arg in os.environ: return os.environ[arg] raise AnsibleFallbackNotFound def set_fallbacks(argument_spec, parameters): no_log_values = set() for param, value in argument_spec.items(): fallback = value.get('fallback', (None,)) fallback_strategy = fallback[0] fallback_args = [] fallback_kwargs = {} if param not in parameters and fallback_strategy is not None: for item in fallback[1:]: if isinstance(item, dict): fallback_kwargs = item else: fallback_args = item try: fallback_value = fallback_strategy(*fallback_args, **fallback_kwargs) except AnsibleFallbackNotFound: continue else: if value.get('no_log', False) and fallback_value: no_log_values.add(fallback_value) parameters[param] = fallback_value return no_log_values def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()): """Sanitize the keys in a container object by removing ``no_log`` values from key names. This is a companion function to the :func:`remove_values` function. Similar to that function, we make use of ``deferred_removals`` to avoid hitting maximum recursion depth in cases of large data structures. :arg obj: The container object to sanitize. Non-container objects are returned unmodified. :arg no_log_strings: A set of string values we do not want logged. :kwarg ignore_keys: A set of string values of keys to not sanitize. :returns: An object with sanitized keys. """ deferred_removals = deque() no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings] new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals) while deferred_removals: old_data, new_data = deferred_removals.popleft() if isinstance(new_data, Mapping): for old_key, old_elem in old_data.items(): if old_key in ignore_keys or old_key.startswith('_ansible'): new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals) else: # Sanitize the old key. We take advantage of the sanitizing code in # _remove_values_conditions() rather than recreating it here. 
new_key = _remove_values_conditions(old_key, no_log_strings, None) new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals) else: for elem in old_data: new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals) if isinstance(new_data, MutableSequence): new_data.append(new_elem) elif isinstance(new_data, MutableSet): new_data.add(new_elem) else: raise TypeError('Unknown container type encountered when removing private values from keys') return new_value def remove_values(value, no_log_strings): """Remove strings in ``no_log_strings`` from value. If value is a container type, then remove a lot more. Use of ``deferred_removals`` exists, rather than a pure recursive solution, because of the potential to hit the maximum recursion depth when dealing with large amounts of data (see `issue #24560 <https://github.com/ansible/ansible/issues/24560>`_). """ deferred_removals = deque() no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings] new_value = _remove_values_conditions(value, no_log_strings, deferred_removals) while deferred_removals: old_data, new_data = deferred_removals.popleft() if isinstance(new_data, Mapping): for old_key, old_elem in old_data.items(): new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals) new_data[old_key] = new_elem else: for elem in old_data: new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals) if isinstance(new_data, MutableSequence): new_data.append(new_elem) elif isinstance(new_data, MutableSet): new_data.add(new_elem) else: raise TypeError('Unknown container type encountered when removing private values from output') return new_value
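To make the no_log machinery above concrete, here is a minimal sketch (assuming `ansible-core` is installed, so the module imports as shipped) of how `remove_values` masks sensitive strings: an exact match becomes a fixed placeholder, while substring occurrences are overwritten with asterisks.

```python
from ansible.module_utils.common.parameters import remove_values

# 's3cret' is an exact match for one value and a substring of another.
data = {'user': 'admin', 'password': 's3cret', 'log': 'auth with s3cret ok'}
print(remove_values(data, {'s3cret'}))
# {'user': 'admin',
#  'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER',
#  'log': 'auth with ******** ok'}
```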
closed
ansible/ansible
https://github.com/ansible/ansible
77,108
Empty strings are accepted when not listed in 'choices'.
### Summary When creating a new plugin, if an argument uses a list of pre-defined `choices`, an empty string ('') is accepted as a valid value in the playbook even though it is not among the defined choices. The problem is that `diff_list` evaluates to "", and thus the test that should raise an error evaluates to false (`lib/ansible/module_utils/common/parameters.py`): https://github.com/ansible/ansible/blob/36121aeee7812e7f37dd49a64c0dbf9cf741878f/lib/ansible/module_utils/common/parameters.py#L653..L660 ### Issue Type Bug Report ### Component Name Any plugin that uses `choices` in one of its parameters. ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/rjeffman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.10/site-packages/ansible ansible collection location = /home/rjeffman/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.10.2 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 3.0.1 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed $ ``` ### OS / Environment Any. ### Steps to Reproduce If you have a plugin (e.g. `myplugin`) where `argument_spec` lists a parameter defined like: ``` ch_param=dict(type="list", choices=["A", "B", "C"]), ``` Then the following task would not raise an error: ```yaml (paste below) - myplugin: ch_param: "" ``` ### Expected Results If a value provided to a parameter with a list of valid choices falls outside that list, an error is expected to be raised. ### Actual Results ```console No error is reported by Ansible. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
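The report is easy to reproduce outside Ansible. Below is a minimal sketch of the quoted check, with names simplified from `_validate_argument_values`: the offending element is itself an empty string, so joining it produces a falsy "" and the error branch never runs. The second half shows one plausible repair (an assumption about the shape of the linked PR, not a quotation of it): test the list itself, and join only to build the message.

```python
choices = ["A", "B", "C"]
value = [""]  # what list coercion produces for ch_param: ""

# Buggy shape: the bad elements are joined into a string first;
# joining a lone empty string yields "", which is falsy, so no error fires.
diff_list = ", ".join([item for item in value if item not in choices])
if diff_list:
    raise ValueError("value must be one or more of: %s" % ", ".join(choices))

# Plausible repair: check the list itself, join only for the error message.
bad_items = [item for item in value if item not in choices]
if bad_items:
    print("got no match for: %r" % bad_items)  # the empty string is now caught
```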
https://github.com/ansible/ansible/issues/77108
https://github.com/ansible/ansible/pull/77119
d60efd97687803fd184ac53aa691bd4e0ec43170
4f48f375a0203b0d09c55522a86300a52da5b24a
2022-02-22T19:09:47Z
python
2022-02-24T19:08:33Z
test/units/module_utils/common/arg_spec/test_validate_invalid.py
# -*- coding: utf-8 -*- # Copyright (c) 2021 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import pytest from ansible.module_utils.common.arg_spec import ArgumentSpecValidator, ValidationResult from ansible.module_utils.errors import AnsibleValidationErrorMultiple from ansible.module_utils.six import PY2 # Each item is id, argument_spec, parameters, expected, unsupported parameters, error test string INVALID_SPECS = [ ( 'invalid-list', {'packages': {'type': 'list'}}, {'packages': {'key': 'value'}}, {'packages': {'key': 'value'}}, set(), "unable to convert to list: <class 'dict'> cannot be converted to a list", ), ( 'invalid-dict', {'users': {'type': 'dict'}}, {'users': ['one', 'two']}, {'users': ['one', 'two']}, set(), "unable to convert to dict: <class 'list'> cannot be converted to a dict", ), ( 'invalid-bool', {'bool': {'type': 'bool'}}, {'bool': {'k': 'v'}}, {'bool': {'k': 'v'}}, set(), "unable to convert to bool: <class 'dict'> cannot be converted to a bool", ), ( 'invalid-float', {'float': {'type': 'float'}}, {'float': 'hello'}, {'float': 'hello'}, set(), "unable to convert to float: <class 'str'> cannot be converted to a float", ), ( 'invalid-bytes', {'bytes': {'type': 'bytes'}}, {'bytes': 'one'}, {'bytes': 'one'}, set(), "unable to convert to bytes: <class 'str'> cannot be converted to a Byte value", ), ( 'invalid-bits', {'bits': {'type': 'bits'}}, {'bits': 'one'}, {'bits': 'one'}, set(), "unable to convert to bits: <class 'str'> cannot be converted to a Bit value", ), ( 'invalid-jsonargs', {'some_json': {'type': 'jsonarg'}}, {'some_json': set()}, {'some_json': set()}, set(), "unable to convert to jsonarg: <class 'set'> cannot be converted to a json string", ), ( 'invalid-parameter', {'name': {}}, { 'badparam': '', 'another': '', }, { 'name': None, 'badparam': '', 'another': '', }, set(('another', 'badparam')), "another, badparam. Supported parameters include: name.", ), ( 'invalid-elements', {'numbers': {'type': 'list', 'elements': 'int'}}, {'numbers': [55, 33, 34, {'key': 'value'}]}, {'numbers': [55, 33, 34]}, set(), "Elements value for option 'numbers' is of type <class 'dict'> and we were unable to convert to int: <class 'dict'> cannot be converted to an int" ), ( 'required', {'req': {'required': True}}, {}, {'req': None}, set(), "missing required arguments: req" ) ] @pytest.mark.parametrize( ('arg_spec', 'parameters', 'expected', 'unsupported', 'error'), (i[1:] for i in INVALID_SPECS), ids=[i[0] for i in INVALID_SPECS] ) def test_invalid_spec(arg_spec, parameters, expected, unsupported, error): v = ArgumentSpecValidator(arg_spec) result = v.validate(parameters) with pytest.raises(AnsibleValidationErrorMultiple) as exc_info: raise result.errors if PY2: error = error.replace('class', 'type') assert isinstance(result, ValidationResult) assert error in exc_info.value.msg assert error in result.error_messages[0] assert result.unsupported_parameters == unsupported assert result.validated_parameters == expected
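Mirroring the harness above, here is a hedged sketch of exercising the empty-string case directly against `ArgumentSpecValidator`; the exact error text is not asserted because it depends on the fixed validator's message.

```python
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

# The spec from the issue report: a list parameter with fixed choices.
spec = {'ch_param': {'type': 'list', 'choices': ['A', 'B', 'C']}}

v = ArgumentSpecValidator(spec)
result = v.validate({'ch_param': ''})  # '' is coerced to [''] for list types

# Before the fix this passed silently; afterwards an error should be recorded.
print(result.error_messages)
```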
closed
ansible/ansible
https://github.com/ansible/ansible
76,786
connection_details.rst uses ansible_user and remote_user interchangeably without explaining the differences (if any) between these settings
### Summary The examples in connection_details.rst in the section Setting a remote user use a mix of `remote_user` and `ansible_user`. It does not explain why it uses one or the other. I couldn't find any official reference saying whether they are interchangeable or not. Some reports on Stack Overflow indicate that there might be subtle differences. E.g. [Ansible remote_user vs ansible_user](https://stackoverflow.com/questions/36668756/ansible-remote-user-vs-ansible-user). I believe a lot of people would like to know which of these to use or when to use one or the other. ### Issue Type Documentation Report ### Component Name ansible/docs/docsite/rst/user_guide/connection_details.rst ### Ansible Version ```console n/a (using 2.9) ``` ### Configuration ```console n/a ``` ### OS / Environment n/a (RHEL 8) ### Additional Information Having official documentation on this will allow all users to make an informed decision. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76786
https://github.com/ansible/ansible/pull/77140
dc6b0d48575e0119cdbb0fd7f66c8dd30b414bdb
22ba18d7b0dc76e6f3e567a2f0510f495233dba2
2022-01-18T16:03:26Z
python
2022-02-25T10:59:11Z
docs/docsite/rst/user_guide/connection_details.rst
.. _connections: ****************************** Connection methods and details ****************************** This section shows you how to expand and refine the connection methods Ansible uses for your inventory. ControlPersist and paramiko --------------------------- By default, Ansible uses native OpenSSH, because it supports ControlPersist (a performance feature), Kerberos, and options in ``~/.ssh/config`` such as Jump Host setup. If your control machine uses an older version of OpenSSH that does not support ControlPersist, Ansible will fallback to a Python implementation of OpenSSH called 'paramiko'. .. _connection_set_user: Setting a remote user --------------------- By default, Ansible connects to all remote devices with the user name you are using on the control node. If that user name does not exist on a remote device, you can set a different user name for the connection. If you just need to do some tasks as a different user, look at :ref:`become`. You can set the connection user in a playbook: .. code-block:: yaml --- - name: update webservers hosts: webservers remote_user: admin tasks: - name: thing to do first in this playbook . . . as a host variable in inventory: .. code-block:: text other1.example.com ansible_connection=ssh ansible_user=myuser other2.example.com ansible_connection=ssh ansible_user=myotheruser or as a group variable in inventory: .. code-block:: yaml cloud: hosts: cloud1: my_backup.cloud.com cloud2: my_backup2.cloud.com vars: ansible_user: admin Setting up SSH keys ------------------- By default, Ansible assumes you are using SSH keys to connect to remote machines. SSH keys are encouraged, but you can use password authentication if needed with the ``--ask-pass`` option. If you need to provide a password for :ref:`privilege escalation <become>` (sudo, pbrun, and so on), use ``--ask-become-pass``. .. include:: shared_snippets/SSH_password_prompt.txt To set up SSH agent to avoid retyping passwords, you can do: .. code-block:: bash $ ssh-agent bash $ ssh-add ~/.ssh/id_rsa Depending on your setup, you may wish to use Ansible's ``--private-key`` command line option to specify a pem file instead. You can also add the private key file: .. code-block:: bash $ ssh-agent bash $ ssh-add ~/.ssh/keypair.pem Another way to add private key files without using ssh-agent is using ``ansible_ssh_private_key_file`` in an inventory file as explained here: :ref:`intro_inventory`. Running against localhost ------------------------- You can run commands against the control node by using "localhost" or "127.0.0.1" for the server name: .. code-block:: bash $ ansible localhost -m ping -e 'ansible_python_interpreter="/usr/bin/env python"' You can specify localhost explicitly by adding this to your inventory file: .. code-block:: bash localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python" .. _host_key_checking_on: Managing host key checking -------------------------- Ansible enables host key checking by default. Checking host keys guards against server spoofing and man-in-the-middle attacks, but it does require some maintenance. If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If a new host is not in 'known_hosts' your control node may prompt for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this. 
If you understand the implications and wish to disable this behavior, you can do so by editing ``/etc/ansible/ansible.cfg`` or ``~/.ansible.cfg``: .. code-block:: text [defaults] host_key_checking = False Alternatively this can be set by the :envvar:`ANSIBLE_HOST_KEY_CHECKING` environment variable: .. code-block:: bash $ export ANSIBLE_HOST_KEY_CHECKING=False Also note that host key checking in paramiko mode is reasonably slow, therefore switching to 'ssh' is also recommended when using this feature. Other connection methods ------------------------ Ansible can use a variety of connection methods beyond SSH. You can select any connection plugin, including managing things locally and managing chroot, lxc, and jail containers. A mode called 'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration directives from a central repository.
closed
ansible/ansible
https://github.com/ansible/ansible
77,136
with_first_found: spaces in filenames cause different behaviour
### Summary When I try to import a vars file that contains spaces with `with_first_found`, it behaves differently depending on whether the files are passed via `_terms` or via `files`. If the list is passed to `files`, the file will not be found and the task fails. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/user/.ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.9.10 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_LOG_PATH(/home/user/.ansible.cfg) = /home/user/ansible.log DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = yaml RETRY_FILES_ENABLED(/home/sjakobs/.ansible.cfg) = False ``` ### OS / Environment Fedora 35 ### Steps to Reproduce The following playbook will fail: <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: files: - "vars_with space.yml" - name: show var debug: var: foo ``` But commenting out the `files` parameter will cause the playbook to succeed. So the following playbook will succeed: ``` --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: # files: - "vars_with space.yml" - name: show var debug: var: foo ``` ### Expected Results ``` PLAY [Converge] ************************************************************************************************* TASK [Gathering Facts] ***************************************************************************************** ok: [localhost] TASK [load var with space] ************************************************************************************* ok: [localhost] => (item=/home/user/molecule/default/vars_with space.yml) TASK [show var] ************************************************************************************************* ok: [localhost] => foo: bar PLAY RECAP ***************************************************************************************************** localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored= ``` ### Actual Results ```console PLAY [Converge] ********************************************************************************************* TASK [Gathering Facts] ********************************************************************************************* ok: [localhost] TASK [load var with space] ********************************************************************************************* fatal: [localhost]: FAILED! => msg: No file was found when using first_found. PLAY RECAP ********************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77136
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-02-24T11:03:04Z
python
2022-03-02T21:16:47Z
changelogs/fragments/77136-first_found-spaces-in-names.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,136
with_first_found: spaces in filenames cause different behaviour
### Summary When I try to import a vars file that contains spaces with `with_first_found`, it behaves differently depending on whether the files are passed via `_terms` or via `files`. If the list is passed to `files`, the file will not be found and the task fails. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/user/.ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.9.10 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_LOG_PATH(/home/user/.ansible.cfg) = /home/user/ansible.log DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = yaml RETRY_FILES_ENABLED(/home/sjakobs/.ansible.cfg) = False ``` ### OS / Environment Fedora 35 ### Steps to Reproduce The following playbook will fail: <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: files: - "vars_with space.yml" - name: show var debug: var: foo ``` But commenting out the `files` parameter will cause the playbook to succeed. So the following playbook will succeed: ``` --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: # files: - "vars_with space.yml" - name: show var debug: var: foo ``` ### Expected Results ``` PLAY [Converge] ************************************************************************************************* TASK [Gathering Facts] ***************************************************************************************** ok: [localhost] TASK [load var with space] ************************************************************************************* ok: [localhost] => (item=/home/user/molecule/default/vars_with space.yml) TASK [show var] ************************************************************************************************* ok: [localhost] => foo: bar PLAY RECAP ***************************************************************************************************** localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored= ``` ### Actual Results ```console PLAY [Converge] ********************************************************************************************* TASK [Gathering Facts] ********************************************************************************************* ok: [localhost] TASK [load var with space] ********************************************************************************************* fatal: [localhost]: FAILED! => msg: No file was found when using first_found. PLAY RECAP ********************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77136
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-02-24T11:03:04Z
python
2022-03-02T21:16:47Z
lib/ansible/plugins/lookup/first_found.py
# (c) 2013, seth vidal <[email protected]> red hat, inc # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ name: first_found author: Seth Vidal (!UNKNOWN) <[email protected]> version_added: historical short_description: return first file found from list description: - This lookup checks a list of files and paths and returns the full path to the first combination found. - As all lookups, when fed relative paths it will try use the current task's location first and go up the chain to the containing locations of role / play / include and so on. - The list of files has precedence over the paths searched. For example, A task in a role has a 'file1' in the play's relative path, this will be used, 'file2' in role's relative path will not. - Either a list of files C(_terms) or a key C(files) with a list of files is required for this plugin to operate. notes: - This lookup can be used in 'dual mode', either passing a list of file names or a dictionary that has C(files) and C(paths). options: _terms: description: A list of file names. files: description: A list of file names. type: list default: [] paths: description: A list of paths in which to look for the files. type: list default: [] skip: type: boolean default: False description: - When C(True), return an empty list when no files are matched. - This is useful when used with C(with_first_found), as an empty list return to C(with_) calls causes the calling task to be skipped. - When used as a template via C(lookup) or C(query), setting I(skip=True) will *not* cause the task to skip. Tasks must handle the empty list return from the template. - When C(False) and C(lookup) or C(query) specifies I(errors='ignore') all errors (including no file found, but potentially others) return an empty string or an empty list respectively. - When C(True) and C(lookup) or C(query) specifies I(errors='ignore'), no file found will return an empty list and other potential errors return an empty string or empty list depending on the template call (in other words return values of C(lookup) v C(query)). 
""" EXAMPLES = """ - name: Set _found_file to the first existing file, raising an error if a file is not found ansible.builtin.set_fact: _found_file: "{{ lookup('ansible.builtin.first_found', findme) }}" vars: findme: - /path/to/foo.txt - bar.txt # will be looked in files/ dir relative to role and/or play - /path/to/biz.txt - name: Set _found_file to the first existing file, or an empty list if no files found ansible.builtin.set_fact: _found_file: "{{ lookup('ansible.builtin.first_found', files, paths=['/extra/path'], skip=True) }}" vars: files: - /path/to/foo.txt - /path/to/bar.txt - name: Include tasks only if one of the files exist, otherwise skip the task ansible.builtin.include_tasks: file: "{{ item }}" with_first_found: files: - path/tasks.yaml - path/other_tasks.yaml skip: True - name: Include tasks only if one of the files exists, otherwise skip ansible.builtin.include_tasks: '{{ tasks_file }}' when: tasks_file != "" vars: tasks_file: "{{ lookup('ansible.builtin.first_found', files=['tasks.yaml', 'other_tasks.yaml'], errors='ignore') }}" - name: | copy first existing file found to /some/file, looking in relative directories from where the task is defined and including any play objects that contain it ansible.builtin.copy: src: "{{ lookup('ansible.builtin.first_found', findme) }}" dest: /some/file vars: findme: - foo - "{{ inventory_hostname }}" - bar - name: same copy but specific paths ansible.builtin.copy: src: "{{ lookup('ansible.builtin.first_found', params) }}" dest: /some/file vars: params: files: - foo - "{{ inventory_hostname }}" - bar paths: - /tmp/production - /tmp/staging - name: INTERFACES | Create Ansible header for /etc/network/interfaces ansible.builtin.template: src: "{{ lookup('ansible.builtin.first_found', findme)}}" dest: "/etc/foo.conf" vars: findme: - "{{ ansible_virtualization_type }}_foo.conf" - "default_foo.conf" - name: read vars from first file found, use 'vars/' relative subdir ansible.builtin.include_vars: "{{lookup('ansible.builtin.first_found', params)}}" vars: params: files: - '{{ ansible_distribution }}.yml' - '{{ ansible_os_family }}.yml' - default.yml paths: - 'vars' """ RETURN = """ _raw: description: - path to file found type: list elements: path """ import os from jinja2.exceptions import UndefinedError from ansible.errors import AnsibleLookupError, AnsibleUndefinedVariable from ansible.module_utils.common._collections_compat import Mapping, Sequence from ansible.module_utils.six import string_types from ansible.plugins.lookup import LookupBase def _split_on(terms, spliters=','): # TODO: fix as it does not allow spaces in names termlist = [] if isinstance(terms, string_types): for spliter in spliters: terms = terms.replace(spliter, ' ') termlist = terms.split(' ') else: # added since options will already listify for t in terms: termlist.extend(_split_on(t, spliters)) return termlist class LookupModule(LookupBase): def _process_terms(self, terms, variables, kwargs): total_search = [] skip = False # can use a dict instead of list item to pass inline config for term in terms: if isinstance(term, Mapping): self.set_options(var_options=variables, direct=term) elif isinstance(term, string_types): self.set_options(var_options=variables, direct=kwargs) elif isinstance(term, Sequence): partial, skip = self._process_terms(term, variables, kwargs) total_search.extend(partial) continue else: raise AnsibleLookupError("Invalid term supplied, can handle string, mapping or list of strings but got: %s for %s" % (type(term), term)) files = 
self.get_option('files') paths = self.get_option('paths') # NOTE: this is used as 'global' but can be set many times?!?!? skip = self.get_option('skip') # magic extra spliting to create lists filelist = _split_on(files, ',;') pathlist = _split_on(paths, ',:;') # create search structure if pathlist: for path in pathlist: for fn in filelist: f = os.path.join(path, fn) total_search.append(f) elif filelist: # NOTE: this seems wrong, should be 'extend' as any option/entry can clobber all total_search = filelist else: total_search.append(term) return total_search, skip def run(self, terms, variables, **kwargs): total_search, skip = self._process_terms(terms, variables, kwargs) # NOTE: during refactor noticed that the 'using a dict' as term # is designed to only work with 'one' otherwise inconsistencies will appear. # see other notes below. # actually search subdir = getattr(self, '_subdir', 'files') path = None for fn in total_search: try: fn = self._templar.template(fn) except (AnsibleUndefinedVariable, UndefinedError): continue # get subdir if set by task executor, default to files otherwise path = self.find_file_in_search_path(variables, subdir, fn, ignore_missing=True) # exit if we find one! if path is not None: return [path] # if we get here, no file was found if skip: # NOTE: global skip wont matter, only last 'skip' value in dict term return [] raise AnsibleLookupError("No file was found when using first_found.")
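The `# TODO` comment inside `_split_on` above marks the culprit for the reported behaviour: every string is re-split on spaces, so a filename such as `vars_with space.yml` is shredded into two bogus candidates. A standalone reproduction (the helper copied from the plugin, with `str` standing in for `string_types`):

```python
def _split_on(terms, spliters=','):
    # Copied from the plugin; note the unconditional split on spaces.
    termlist = []
    if isinstance(terms, str):
        for spliter in spliters:
            terms = terms.replace(spliter, ' ')
        termlist = terms.split(' ')
    else:
        for t in terms:
            termlist.extend(_split_on(t, spliters))
    return termlist

print(_split_on(["vars_with space.yml"], ',;'))
# ['vars_with', 'space.yml']; neither candidate exists, so the lookup fails
```

Bare `_terms` entries, by contrast, fall through to `total_search.append(term)` untouched, which is why the second playbook in the report succeeds.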
closed
ansible/ansible
https://github.com/ansible/ansible
77,136
with_first_found: spaces in filenames cause different behaviour
### Summary When I try to import a vars file that contains spaces with `with_first_found`, it behaves differently depending on whether the files are passed via `_terms` or via `files`. If the list is passed to `files`, the file will not be found and the task fails. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/user/.ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.9.10 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_LOG_PATH(/home/user/.ansible.cfg) = /home/user/ansible.log DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = yaml RETRY_FILES_ENABLED(/home/sjakobs/.ansible.cfg) = False ``` ### OS / Environment Fedora 35 ### Steps to Reproduce The following playbook will fail: <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: files: - "vars_with space.yml" - name: show var debug: var: foo ``` But commenting out the `files` parameter will cause the playbook to succeed. So the following playbook will succeed: ``` --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: # files: - "vars_with space.yml" - name: show var debug: var: foo ``` ### Expected Results ``` PLAY [Converge] ************************************************************************************************* TASK [Gathering Facts] ***************************************************************************************** ok: [localhost] TASK [load var with space] ************************************************************************************* ok: [localhost] => (item=/home/user/molecule/default/vars_with space.yml) TASK [show var] ************************************************************************************************* ok: [localhost] => foo: bar PLAY RECAP ***************************************************************************************************** localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored= ``` ### Actual Results ```console PLAY [Converge] ********************************************************************************************* TASK [Gathering Facts] ********************************************************************************************* ok: [localhost] TASK [load var with space] ********************************************************************************************* fatal: [localhost]: FAILED! => msg: No file was found when using first_found. PLAY RECAP ********************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77136
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-02-24T11:03:04Z
python
2022-03-02T21:16:47Z
test/integration/targets/lookup_first_found/files/vars file spaces.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,136
with_first_found: spaces in filenames cause different behaviour
### Summary When I try to import a vars file that contains spaces with `with_first_found`, it behaves differently depending on whether the files are passed via `_terms` or via `files`. If the list is passed to `files`, the file will not be found and the task fails. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.2] config file = /home/user/.ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.9.10 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_LOG_PATH(/home/user/.ansible.cfg) = /home/user/ansible.log DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = yaml RETRY_FILES_ENABLED(/home/sjakobs/.ansible.cfg) = False ``` ### OS / Environment Fedora 35 ### Steps to Reproduce The following playbook will fail: <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: files: - "vars_with space.yml" - name: show var debug: var: foo ``` But commenting out the `files` parameter will cause the playbook to succeed. So the following playbook will succeed: ``` --- - name: Converge hosts: localhost tasks: - name: load var with space include_vars: "{{ item }}" with_first_found: # files: - "vars_with space.yml" - name: show var debug: var: foo ``` ### Expected Results ``` PLAY [Converge] ************************************************************************************************* TASK [Gathering Facts] ***************************************************************************************** ok: [localhost] TASK [load var with space] ************************************************************************************* ok: [localhost] => (item=/home/user/molecule/default/vars_with space.yml) TASK [show var] ************************************************************************************************* ok: [localhost] => foo: bar PLAY RECAP ***************************************************************************************************** localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored= ``` ### Actual Results ```console PLAY [Converge] ********************************************************************************************* TASK [Gathering Facts] ********************************************************************************************* ok: [localhost] TASK [load var with space] ********************************************************************************************* fatal: [localhost]: FAILED! => msg: No file was found when using first_found. PLAY RECAP ********************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77136
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-02-24T11:03:04Z
python
2022-03-02T21:16:47Z
test/integration/targets/lookup_first_found/tasks/main.yml
- name: test with_first_found set_fact: "first_found={{ item }}" with_first_found: - "does_not_exist" - "foo1" - "{{ role_path + '/files/bar1' }}" # will only hit this if dwim search is broken - name: set expected set_fact: first_expected="{{ role_path + '/files/foo1' }}" - name: set unexpected set_fact: first_unexpected="{{ role_path + '/files/bar1' }}" - name: verify with_first_found results assert: that: - "first_found == first_expected" - "first_found != first_unexpected" - name: test q(first_found) with no files produces empty list set_fact: first_found_var: "{{ q('first_found', params, errors='ignore') }}" vars: params: files: "not_a_file.yaml" skip: True - name: verify q(first_found) result assert: that: - "first_found_var == []" - name: test lookup(first_found) with no files produces empty string set_fact: first_found_var: "{{ lookup('first_found', params, errors='ignore') }}" vars: params: files: "not_a_file.yaml" - name: verify lookup(first_found) result assert: that: - "first_found_var == ''" # NOTE: skip: True deprecated e17a2b502d6601be53c60d7ba1c627df419460c9, remove 2.12 - name: test first_found with no matches and skip=True does nothing set_fact: "this_not_set={{ item }}" vars: params: files: - not/a/file.yaml - another/non/file.yaml skip: True loop: "{{ q('first_found', params) }}" - name: verify skip assert: that: - "this_not_set is not defined" - name: test first_found with no matches and errors='ignore' skips in a loop set_fact: "this_not_set={{ item }}" vars: params: files: - not/a/file.yaml - another/non/file.yaml loop: "{{ query('first_found', params, errors='ignore') }}" - name: verify errors=ignore assert: that: - "this_not_set is not defined" - name: test legacy formats set_fact: hatethisformat={{item}} vars: params: files: not/a/file.yaml;hosts paths: not/a/path:/etc loop: "{{ q('first_found', params) }}" - name: verify /etc/hosts was found assert: that: - "hatethisformat == '/etc/hosts'"
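The 'test legacy formats' case above exercises that same splitting machinery on purpose: `not/a/file.yaml;hosts` and `not/a/path:/etc` are meant to be split. A short sketch of the search order the plugin builds from them, assuming the path-major loop shown earlier:

```python
import os

files = ['not/a/file.yaml', 'hosts']  # 'not/a/file.yaml;hosts' split on ';'
paths = ['not/a/path', '/etc']        # 'not/a/path:/etc' split on ':'

# Path-major cross product, as in _process_terms().
search = [os.path.join(p, f) for p in paths for f in files]
print(search)
# ['not/a/path/not/a/file.yaml', 'not/a/path/hosts',
#  '/etc/not/a/file.yaml', '/etc/hosts']  -> only '/etc/hosts' exists
```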
closed
ansible/ansible
https://github.com/ansible/ansible
76,651
lookup first_found fails if search path contains a space
### Summary lookup first_found fails when path is configured and contains spaces. Wasn't the issue with 2.9, started to occur with 2.12. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment MacOS 12.1 Ubuntu 20.04 ### Steps to Reproduce hosts ```yaml localhost ansible_connection=local ``` playbook.yml ```yaml --- - hosts: all roles: - test ``` roles/test/tasks/main.yml ```yaml --- - include_vars: "{{lookup('first_found', params)}}" vars: params: files: - "test_vars.yml" paths: - "{{role_path}}/vars" ``` roles/test/vars/test_vars.yml ```yaml --- ``` ### Expected Results ansible-playbook executes successfully ### Actual Results ```console afunix@blake ~/tmp/bug $ pwd /Users/afunix/tmp/bug afunix@blake ~/tmp/bug $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** ok: [localhost] PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ pwd /Users/afunix/tmp/bug with spaces afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> {"msg": "No file was found when using first_found."} PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml -vvvvv ansible-playbook [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible-playbook python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True No config file found; using defaults Reading vault password file: /Users/afunix/.vault setting up inventory plugins host_list declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method script declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method auto declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method Set default localhost to localhost Parsed /Users/afunix/tmp/bug with spaces/hosts inventory source with ini plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/plugins/callback/default.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************* Positional arguments: playbook.yml verbosity: 5 connection: smart timeout: 10 become_method: sudo tags: ('all',) diff: True inventory: ('/Users/afunix/tmp/bug with spaces/hosts',) forks: 5 1 plays in playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* task path: /Users/afunix/tmp/bug with spaces/playbook.yml:2 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: afunix <localhost> EXEC /bin/sh -c 'echo ~afunix && sleep 0' <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/afunix/.ansible/tmp `"&& mkdir "` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" && echo ansible-tmp-1641344178.921433-70880-92695461947872="` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" ) && sleep 0' Including module_utils file ansible/__init__.py Including module_utils file ansible/module_utils/__init__.py Including module_utils file ansible/module_utils/_text.py Including module_utils file ansible/module_utils/basic.py Including module_utils file ansible/module_utils/common/_collections_compat.py Including module_utils file ansible/module_utils/common/__init__.py Including module_utils file ansible/module_utils/common/_json_compat.py Including module_utils file ansible/module_utils/common/_utils.py Including module_utils file ansible/module_utils/common/arg_spec.py Including module_utils file ansible/module_utils/common/file.py Including module_utils file ansible/module_utils/common/locale.py Including module_utils file ansible/module_utils/common/parameters.py Including module_utils file ansible/module_utils/common/collections.py Including module_utils file ansible/module_utils/common/process.py Including module_utils file ansible/module_utils/common/sys_info.py Including module_utils file ansible/module_utils/common/text/converters.py Including module_utils file ansible/module_utils/common/text/__init__.py Including module_utils file ansible/module_utils/common/text/formatters.py Including module_utils file ansible/module_utils/common/validation.py Including module_utils file ansible/module_utils/common/warnings.py Including module_utils file ansible/module_utils/compat/selectors.py Including module_utils file ansible/module_utils/compat/__init__.py Including module_utils file ansible/module_utils/compat/_selectors2.py Including module_utils file ansible/module_utils/compat/selinux.py Including module_utils file ansible/module_utils/distro/__init__.py Including module_utils file ansible/module_utils/distro/_distro.py Including module_utils file ansible/module_utils/errors.py Including module_utils file ansible/module_utils/facts/ansible_collector.py Including module_utils file ansible/module_utils/facts/__init__.py Including module_utils file 
ansible/module_utils/facts/collector.py Including module_utils file ansible/module_utils/facts/compat.py Including module_utils file ansible/module_utils/facts/default_collectors.py Including module_utils file ansible/module_utils/facts/hardware/aix.py Including module_utils file ansible/module_utils/facts/hardware/__init__.py Including module_utils file ansible/module_utils/facts/hardware/base.py Including module_utils file ansible/module_utils/facts/hardware/darwin.py Including module_utils file ansible/module_utils/facts/hardware/dragonfly.py Including module_utils file ansible/module_utils/facts/hardware/freebsd.py Including module_utils file ansible/module_utils/facts/hardware/hpux.py Including module_utils file ansible/module_utils/facts/hardware/hurd.py Including module_utils file ansible/module_utils/facts/hardware/linux.py Including module_utils file ansible/module_utils/facts/hardware/netbsd.py Including module_utils file ansible/module_utils/facts/hardware/openbsd.py Including module_utils file ansible/module_utils/facts/hardware/sunos.py Including module_utils file ansible/module_utils/facts/namespace.py Including module_utils file ansible/module_utils/facts/network/aix.py Including module_utils file ansible/module_utils/facts/network/__init__.py Including module_utils file ansible/module_utils/facts/network/base.py Including module_utils file ansible/module_utils/facts/network/darwin.py Including module_utils file ansible/module_utils/facts/network/dragonfly.py Including module_utils file ansible/module_utils/facts/network/fc_wwn.py Including module_utils file ansible/module_utils/facts/network/freebsd.py Including module_utils file ansible/module_utils/facts/network/generic_bsd.py Including module_utils file ansible/module_utils/facts/network/hpux.py Including module_utils file ansible/module_utils/facts/network/hurd.py Including module_utils file ansible/module_utils/facts/network/iscsi.py Including module_utils file ansible/module_utils/facts/network/linux.py Including module_utils file ansible/module_utils/facts/network/netbsd.py Including module_utils file ansible/module_utils/facts/network/nvme.py Including module_utils file ansible/module_utils/facts/network/openbsd.py Including module_utils file ansible/module_utils/facts/network/sunos.py Including module_utils file ansible/module_utils/facts/other/facter.py Including module_utils file ansible/module_utils/facts/other/__init__.py Including module_utils file ansible/module_utils/facts/other/ohai.py Including module_utils file ansible/module_utils/facts/sysctl.py Including module_utils file ansible/module_utils/facts/system/apparmor.py Including module_utils file ansible/module_utils/facts/system/__init__.py Including module_utils file ansible/module_utils/facts/system/caps.py Including module_utils file ansible/module_utils/facts/system/chroot.py Including module_utils file ansible/module_utils/facts/system/cmdline.py Including module_utils file ansible/module_utils/facts/system/date_time.py Including module_utils file ansible/module_utils/facts/system/distribution.py Including module_utils file ansible/module_utils/facts/system/dns.py Including module_utils file ansible/module_utils/facts/system/env.py Including module_utils file ansible/module_utils/facts/system/fips.py Including module_utils file ansible/module_utils/facts/system/local.py Including module_utils file ansible/module_utils/facts/system/lsb.py Including module_utils file ansible/module_utils/facts/system/pkg_mgr.py Including module_utils file 
ansible/module_utils/facts/system/platform.py Including module_utils file ansible/module_utils/facts/system/python.py Including module_utils file ansible/module_utils/facts/system/selinux.py Including module_utils file ansible/module_utils/facts/system/service_mgr.py Including module_utils file ansible/module_utils/compat/version.py Including module_utils file ansible/module_utils/facts/system/ssh_pub_keys.py Including module_utils file ansible/module_utils/facts/system/user.py Including module_utils file ansible/module_utils/facts/timeout.py Including module_utils file ansible/module_utils/facts/utils.py Including module_utils file ansible/module_utils/facts/virtual/base.py Including module_utils file ansible/module_utils/facts/virtual/__init__.py Including module_utils file ansible/module_utils/facts/virtual/dragonfly.py Including module_utils file ansible/module_utils/facts/virtual/freebsd.py Including module_utils file ansible/module_utils/facts/virtual/hpux.py Including module_utils file ansible/module_utils/facts/virtual/linux.py Including module_utils file ansible/module_utils/facts/virtual/netbsd.py Including module_utils file ansible/module_utils/facts/virtual/openbsd.py Including module_utils file ansible/module_utils/facts/virtual/sunos.py Including module_utils file ansible/module_utils/facts/virtual/sysctl.py Including module_utils file ansible/module_utils/parsing/convert_bool.py Including module_utils file ansible/module_utils/parsing/__init__.py Including module_utils file ansible/module_utils/pycompat24.py Including module_utils file ansible/module_utils/six/__init__.py <localhost> Attempting python interpreter discovery <localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0' <localhost> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin) Using module file /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/modules/setup.py <localhost> PUT /Users/afunix/.ansible/tmp/ansible-local-70877y8t3c9b_/tmp3he6rmv3 TO /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/local/bin/python3.9 /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ > /dev/null 2>&1 && sleep 0' [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] META: ran handlers TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** task path: /Users/afunix/tmp/bug with spaces/roles/test/tasks/main.yml:2 looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" fatal: [localhost]: FAILED! => { "msg": "No file was found when using first_found." } PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
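The `-vvvvv` trace above is the giveaway: the expanded `paths` entry `/Users/afunix/tmp/bug with spaces/roles/test/vars` is being split on its spaces, and the tail fragments become the bogus relative names `with/test_vars.yml` and `spaces/roles/test/vars/test_vars.yml` that the log then searches for. A few lines of Python re-derive those names (an illustration of the splitting, not plugin code):

```python
import os

# Expanded 'paths' entry from the report, split on spaces the way the
# lookup's splitting helper does it (illustrative re-derivation).
path = "/Users/afunix/tmp/bug with spaces/roles/test/vars"
candidates = [os.path.join(piece, "test_vars.yml") for piece in path.split(' ')]
print(candidates)
# ['/Users/afunix/tmp/bug/test_vars.yml',
#  'with/test_vars.yml',
#  'spaces/roles/test/vars/test_vars.yml']
# The last two are exactly the names the -vvvvv log above searches for.
```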
https://github.com/ansible/ansible/issues/76651
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-01-05T00:58:49Z
python
2022-03-02T21:16:47Z
changelogs/fragments/77136-first_found-spaces-in-names.yml
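This record names the changelog fragment but does not include its content. ansible-core bugfix fragments are small YAML files with a `bugfixes` list; a plausible shape for this one, with wording invented here rather than taken from the commit, would be:

```yaml
# Hypothetical reconstruction -- the committed fragment's actual wording
# is not included in this record.
bugfixes:
  - first_found lookup - do not split file and path names on spaces, so
    names containing spaces are searched for as-is
    (https://github.com/ansible/ansible/issues/77136).
```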
closed
ansible/ansible
https://github.com/ansible/ansible
76,651
lookup first_found fails if search path contains space
### Summary lookup first_found fails when path is configured and contains spaces. Wasn't the issue with 2.9, started to occur with 2.12. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment MacOS 12.1 Ubuntu 20.04 ### Steps to Reproduce hosts ```yaml localhost ansible_connection=local ``` playbook.yml ```yaml --- - hosts: all roles: - test ``` roles/test/tasks/main.yml ```yaml --- - include_vars: "{{lookup('first_found', params)}}" vars: params: files: - "test_vars.yml" paths: - "{{role_path}}/vars" ``` roles/test/vars/test_vars.yml ```yaml --- ``` ### Expected Results ansible-playbook executes successfully ### Actual Results ```console afunix@blake ~/tmp/bug $ pwd /Users/afunix/tmp/bug afunix@blake ~/tmp/bug $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** ok: [localhost] PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ pwd /Users/afunix/tmp/bug with spaces afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> {"msg": "No file was found when using first_found."} PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml -vvvvv ansible-playbook [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible-playbook python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True No config file found; using defaults Reading vault password file: /Users/afunix/.vault setting up inventory plugins host_list declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method script declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method auto declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method Set default localhost to localhost Parsed /Users/afunix/tmp/bug with spaces/hosts inventory source with ini plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/plugins/callback/default.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************* Positional arguments: playbook.yml verbosity: 5 connection: smart timeout: 10 become_method: sudo tags: ('all',) diff: True inventory: ('/Users/afunix/tmp/bug with spaces/hosts',) forks: 5 1 plays in playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* task path: /Users/afunix/tmp/bug with spaces/playbook.yml:2 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: afunix <localhost> EXEC /bin/sh -c 'echo ~afunix && sleep 0' <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/afunix/.ansible/tmp `"&& mkdir "` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" && echo ansible-tmp-1641344178.921433-70880-92695461947872="` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" ) && sleep 0' Including module_utils file ansible/__init__.py Including module_utils file ansible/module_utils/__init__.py Including module_utils file ansible/module_utils/_text.py Including module_utils file ansible/module_utils/basic.py Including module_utils file ansible/module_utils/common/_collections_compat.py Including module_utils file ansible/module_utils/common/__init__.py Including module_utils file ansible/module_utils/common/_json_compat.py Including module_utils file ansible/module_utils/common/_utils.py Including module_utils file ansible/module_utils/common/arg_spec.py Including module_utils file ansible/module_utils/common/file.py Including module_utils file ansible/module_utils/common/locale.py Including module_utils file ansible/module_utils/common/parameters.py Including module_utils file ansible/module_utils/common/collections.py Including module_utils file ansible/module_utils/common/process.py Including module_utils file ansible/module_utils/common/sys_info.py Including module_utils file ansible/module_utils/common/text/converters.py Including module_utils file ansible/module_utils/common/text/__init__.py Including module_utils file ansible/module_utils/common/text/formatters.py Including module_utils file ansible/module_utils/common/validation.py Including module_utils file ansible/module_utils/common/warnings.py Including module_utils file ansible/module_utils/compat/selectors.py Including module_utils file ansible/module_utils/compat/__init__.py Including module_utils file ansible/module_utils/compat/_selectors2.py Including module_utils file ansible/module_utils/compat/selinux.py Including module_utils file ansible/module_utils/distro/__init__.py Including module_utils file ansible/module_utils/distro/_distro.py Including module_utils file ansible/module_utils/errors.py Including module_utils file ansible/module_utils/facts/ansible_collector.py Including module_utils file ansible/module_utils/facts/__init__.py Including module_utils file 
ansible/module_utils/facts/collector.py Including module_utils file ansible/module_utils/facts/compat.py Including module_utils file ansible/module_utils/facts/default_collectors.py Including module_utils file ansible/module_utils/facts/hardware/aix.py Including module_utils file ansible/module_utils/facts/hardware/__init__.py Including module_utils file ansible/module_utils/facts/hardware/base.py Including module_utils file ansible/module_utils/facts/hardware/darwin.py Including module_utils file ansible/module_utils/facts/hardware/dragonfly.py Including module_utils file ansible/module_utils/facts/hardware/freebsd.py Including module_utils file ansible/module_utils/facts/hardware/hpux.py Including module_utils file ansible/module_utils/facts/hardware/hurd.py Including module_utils file ansible/module_utils/facts/hardware/linux.py Including module_utils file ansible/module_utils/facts/hardware/netbsd.py Including module_utils file ansible/module_utils/facts/hardware/openbsd.py Including module_utils file ansible/module_utils/facts/hardware/sunos.py Including module_utils file ansible/module_utils/facts/namespace.py Including module_utils file ansible/module_utils/facts/network/aix.py Including module_utils file ansible/module_utils/facts/network/__init__.py Including module_utils file ansible/module_utils/facts/network/base.py Including module_utils file ansible/module_utils/facts/network/darwin.py Including module_utils file ansible/module_utils/facts/network/dragonfly.py Including module_utils file ansible/module_utils/facts/network/fc_wwn.py Including module_utils file ansible/module_utils/facts/network/freebsd.py Including module_utils file ansible/module_utils/facts/network/generic_bsd.py Including module_utils file ansible/module_utils/facts/network/hpux.py Including module_utils file ansible/module_utils/facts/network/hurd.py Including module_utils file ansible/module_utils/facts/network/iscsi.py Including module_utils file ansible/module_utils/facts/network/linux.py Including module_utils file ansible/module_utils/facts/network/netbsd.py Including module_utils file ansible/module_utils/facts/network/nvme.py Including module_utils file ansible/module_utils/facts/network/openbsd.py Including module_utils file ansible/module_utils/facts/network/sunos.py Including module_utils file ansible/module_utils/facts/other/facter.py Including module_utils file ansible/module_utils/facts/other/__init__.py Including module_utils file ansible/module_utils/facts/other/ohai.py Including module_utils file ansible/module_utils/facts/sysctl.py Including module_utils file ansible/module_utils/facts/system/apparmor.py Including module_utils file ansible/module_utils/facts/system/__init__.py Including module_utils file ansible/module_utils/facts/system/caps.py Including module_utils file ansible/module_utils/facts/system/chroot.py Including module_utils file ansible/module_utils/facts/system/cmdline.py Including module_utils file ansible/module_utils/facts/system/date_time.py Including module_utils file ansible/module_utils/facts/system/distribution.py Including module_utils file ansible/module_utils/facts/system/dns.py Including module_utils file ansible/module_utils/facts/system/env.py Including module_utils file ansible/module_utils/facts/system/fips.py Including module_utils file ansible/module_utils/facts/system/local.py Including module_utils file ansible/module_utils/facts/system/lsb.py Including module_utils file ansible/module_utils/facts/system/pkg_mgr.py Including module_utils file 
ansible/module_utils/facts/system/platform.py Including module_utils file ansible/module_utils/facts/system/python.py Including module_utils file ansible/module_utils/facts/system/selinux.py Including module_utils file ansible/module_utils/facts/system/service_mgr.py Including module_utils file ansible/module_utils/compat/version.py Including module_utils file ansible/module_utils/facts/system/ssh_pub_keys.py Including module_utils file ansible/module_utils/facts/system/user.py Including module_utils file ansible/module_utils/facts/timeout.py Including module_utils file ansible/module_utils/facts/utils.py Including module_utils file ansible/module_utils/facts/virtual/base.py Including module_utils file ansible/module_utils/facts/virtual/__init__.py Including module_utils file ansible/module_utils/facts/virtual/dragonfly.py Including module_utils file ansible/module_utils/facts/virtual/freebsd.py Including module_utils file ansible/module_utils/facts/virtual/hpux.py Including module_utils file ansible/module_utils/facts/virtual/linux.py Including module_utils file ansible/module_utils/facts/virtual/netbsd.py Including module_utils file ansible/module_utils/facts/virtual/openbsd.py Including module_utils file ansible/module_utils/facts/virtual/sunos.py Including module_utils file ansible/module_utils/facts/virtual/sysctl.py Including module_utils file ansible/module_utils/parsing/convert_bool.py Including module_utils file ansible/module_utils/parsing/__init__.py Including module_utils file ansible/module_utils/pycompat24.py Including module_utils file ansible/module_utils/six/__init__.py <localhost> Attempting python interpreter discovery <localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0' <localhost> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin) Using module file /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/modules/setup.py <localhost> PUT /Users/afunix/.ansible/tmp/ansible-local-70877y8t3c9b_/tmp3he6rmv3 TO /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/local/bin/python3.9 /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ > /dev/null 2>&1 && sleep 0' [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] META: ran handlers TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** task path: /Users/afunix/tmp/bug with spaces/roles/test/tasks/main.yml:2 looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" fatal: [localhost]: FAILED! => { "msg": "No file was found when using first_found." } PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
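Pending the fix, the asymmetry documented in #77136 suggests a workaround for this role: join the path yourself and pass it as a bare term, which bypasses the `files`/`paths` splitting entirely. A sketch using the reporter's layout:

```yaml
# Workaround sketch: a bare term is searched as-is, so the space inside
# role_path survives. Based on the reporter's role layout.
- include_vars: "{{ lookup('first_found', role_path + '/vars/test_vars.yml') }}"
```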
https://github.com/ansible/ansible/issues/76651
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-01-05T00:58:49Z
python
2022-03-02T21:16:47Z
lib/ansible/plugins/lookup/first_found.py
```python
# (c) 2013, seth vidal <[email protected]> red hat, inc
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = """
    name: first_found
    author: Seth Vidal (!UNKNOWN) <[email protected]>
    version_added: historical
    short_description: return first file found from list
    description:
      - This lookup checks a list of files and paths and returns the full path to the first combination found.
      - As all lookups, when fed relative paths it will try use the current task's location first and go up the chain
        to the containing locations of role / play / include and so on.
      - The list of files has precedence over the paths searched.
        For example, A task in a role has a 'file1' in the play's relative path, this will be used, 'file2' in role's relative path will not.
      - Either a list of files C(_terms) or a key C(files) with a list of files is required for this plugin to operate.
    notes:
      - This lookup can be used in 'dual mode', either passing a list of file names or a dictionary that has C(files) and C(paths).
    options:
      _terms:
        description: A list of file names.
      files:
        description: A list of file names.
        type: list
        default: []
      paths:
        description: A list of paths in which to look for the files.
        type: list
        default: []
      skip:
        type: boolean
        default: False
        description:
          - When C(True), return an empty list when no files are matched.
          - This is useful when used with C(with_first_found), as an empty list return to C(with_) calls causes the calling task to be skipped.
          - When used as a template via C(lookup) or C(query), setting I(skip=True) will *not* cause the task to skip.
            Tasks must handle the empty list return from the template.
          - When C(False) and C(lookup) or C(query) specifies I(errors='ignore') all errors (including no file found,
            but potentially others) return an empty string or an empty list respectively.
          - When C(True) and C(lookup) or C(query) specifies I(errors='ignore'), no file found will return an empty list
            and other potential errors return an empty string or empty list depending on the template call
            (in other words return values of C(lookup) v C(query)).
"""

EXAMPLES = """
- name: Set _found_file to the first existing file, raising an error if a file is not found
  ansible.builtin.set_fact:
    _found_file: "{{ lookup('ansible.builtin.first_found', findme) }}"
  vars:
    findme:
      - /path/to/foo.txt
      - bar.txt  # will be looked in files/ dir relative to role and/or play
      - /path/to/biz.txt

- name: Set _found_file to the first existing file, or an empty list if no files found
  ansible.builtin.set_fact:
    _found_file: "{{ lookup('ansible.builtin.first_found', files, paths=['/extra/path'], skip=True) }}"
  vars:
    files:
      - /path/to/foo.txt
      - /path/to/bar.txt

- name: Include tasks only if one of the files exist, otherwise skip the task
  ansible.builtin.include_tasks:
    file: "{{ item }}"
  with_first_found:
    files:
      - path/tasks.yaml
      - path/other_tasks.yaml
    skip: True

- name: Include tasks only if one of the files exists, otherwise skip
  ansible.builtin.include_tasks: '{{ tasks_file }}'
  when: tasks_file != ""
  vars:
    tasks_file: "{{ lookup('ansible.builtin.first_found', files=['tasks.yaml', 'other_tasks.yaml'], errors='ignore') }}"

- name: |
        copy first existing file found to /some/file,
        looking in relative directories from where the task is defined and
        including any play objects that contain it
  ansible.builtin.copy:
    src: "{{ lookup('ansible.builtin.first_found', findme) }}"
    dest: /some/file
  vars:
    findme:
      - foo
      - "{{ inventory_hostname }}"
      - bar

- name: same copy but specific paths
  ansible.builtin.copy:
    src: "{{ lookup('ansible.builtin.first_found', params) }}"
    dest: /some/file
  vars:
    params:
      files:
        - foo
        - "{{ inventory_hostname }}"
        - bar
      paths:
        - /tmp/production
        - /tmp/staging

- name: INTERFACES | Create Ansible header for /etc/network/interfaces
  ansible.builtin.template:
    src: "{{ lookup('ansible.builtin.first_found', findme)}}"
    dest: "/etc/foo.conf"
  vars:
    findme:
      - "{{ ansible_virtualization_type }}_foo.conf"
      - "default_foo.conf"

- name: read vars from first file found, use 'vars/' relative subdir
  ansible.builtin.include_vars: "{{lookup('ansible.builtin.first_found', params)}}"
  vars:
    params:
      files:
        - '{{ ansible_distribution }}.yml'
        - '{{ ansible_os_family }}.yml'
        - default.yml
      paths:
        - 'vars'
"""

RETURN = """
  _raw:
    description:
      - path to file found
    type: list
    elements: path
"""
import os

from jinja2.exceptions import UndefinedError

from ansible.errors import AnsibleLookupError, AnsibleUndefinedVariable
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import string_types
from ansible.plugins.lookup import LookupBase


def _split_on(terms, spliters=','):

    # TODO: fix as it does not allow spaces in names
    termlist = []
    if isinstance(terms, string_types):
        for spliter in spliters:
            terms = terms.replace(spliter, ' ')
        termlist = terms.split(' ')
    else:
        # added since options will already listify
        for t in terms:
            termlist.extend(_split_on(t, spliters))

    return termlist


class LookupModule(LookupBase):

    def _process_terms(self, terms, variables, kwargs):

        total_search = []
        skip = False

        # can use a dict instead of list item to pass inline config
        for term in terms:
            if isinstance(term, Mapping):
                self.set_options(var_options=variables, direct=term)
            elif isinstance(term, string_types):
                self.set_options(var_options=variables, direct=kwargs)
            elif isinstance(term, Sequence):
                partial, skip = self._process_terms(term, variables, kwargs)
                total_search.extend(partial)
                continue
            else:
                raise AnsibleLookupError("Invalid term supplied, can handle string, mapping or list of strings but got: %s for %s" % (type(term), term))

            files = self.get_option('files')
            paths = self.get_option('paths')

            # NOTE: this is used as 'global' but can be set many times?!?!?
            skip = self.get_option('skip')

            # magic extra spliting to create lists
            filelist = _split_on(files, ',;')
            pathlist = _split_on(paths, ',:;')

            # create search structure
            if pathlist:
                for path in pathlist:
                    for fn in filelist:
                        f = os.path.join(path, fn)
                        total_search.append(f)
            elif filelist:
                # NOTE: this seems wrong, should be 'extend' as any option/entry can clobber all
                total_search = filelist
            else:
                total_search.append(term)

        return total_search, skip

    def run(self, terms, variables, **kwargs):

        total_search, skip = self._process_terms(terms, variables, kwargs)

        # NOTE: during refactor noticed that the 'using a dict' as term
        #  is designed to only work with 'one' otherwise inconsistencies will appear.
        #  see other notes below.

        # actually search
        subdir = getattr(self, '_subdir', 'files')

        path = None
        for fn in total_search:

            try:
                fn = self._templar.template(fn)
            except (AnsibleUndefinedVariable, UndefinedError):
                continue

            # get subdir if set by task executor, default to files otherwise
            path = self.find_file_in_search_path(variables, subdir, fn, ignore_missing=True)

            # exit if we find one!
            if path is not None:
                return [path]

        # if we get here, no file was found
        if skip:
            # NOTE: global skip wont matter, only last 'skip' value in dict term
            return []
        raise AnsibleLookupError("No file was found when using first_found.")
```
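The `TODO: fix as it does not allow spaces in names` comment on `_split_on` above marks the culprit: every delimiter is first replaced with a space and the string is then split on spaces, taking any legitimate space in a name with it. One minimal direction for a fix is sketched below as a drop-in for the module above (so `string_types` refers to the import already shown), keeping the legacy `,`/`;`/`:` splitting while never treating a space as a separator; this is a sketch under those assumptions, not necessarily the change PR #77141 actually merged.

```python
# Sketch only: split on the legacy delimiters without using a space as the
# intermediate separator, so 'vars_with space.yml' passes through intact.
# Not necessarily the change merged in PR #77141.
def _split_on(terms, spliters=','):
    termlist = []
    if isinstance(terms, string_types):
        for spliter in spliters:
            terms = terms.replace(spliter, '\x00')  # NUL cannot occur in file names
        termlist = [t for t in terms.split('\x00') if t]
    else:
        # options are already listified; recurse per element
        for t in terms:
            termlist.extend(_split_on(t, spliters))
    return termlist
```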
closed
ansible/ansible
https://github.com/ansible/ansible
76,651
lookup first_found fails if search path contains space
### Summary lookup first_found fails when path is configured and contains spaces. Wasn't the issue with 2.9, started to occur with 2.12. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment MacOS 12.1 Ubuntu 20.04 ### Steps to Reproduce hosts ```yaml localhost ansible_connection=local ``` playbook.yml ```yaml --- - hosts: all roles: - test ``` roles/test/tasks/main.yml ```yaml --- - include_vars: "{{lookup('first_found', params)}}" vars: params: files: - "test_vars.yml" paths: - "{{role_path}}/vars" ``` roles/test/vars/test_vars.yml ```yaml --- ``` ### Expected Results ansible-playbook executes successfully ### Actual Results ```console afunix@blake ~/tmp/bug $ pwd /Users/afunix/tmp/bug afunix@blake ~/tmp/bug $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** ok: [localhost] PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ pwd /Users/afunix/tmp/bug with spaces afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> {"msg": "No file was found when using first_found."} PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml -vvvvv ansible-playbook [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible-playbook python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True No config file found; using defaults Reading vault password file: /Users/afunix/.vault setting up inventory plugins host_list declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method script declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method auto declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method Set default localhost to localhost Parsed /Users/afunix/tmp/bug with spaces/hosts inventory source with ini plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/plugins/callback/default.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************* Positional arguments: playbook.yml verbosity: 5 connection: smart timeout: 10 become_method: sudo tags: ('all',) diff: True inventory: ('/Users/afunix/tmp/bug with spaces/hosts',) forks: 5 1 plays in playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* task path: /Users/afunix/tmp/bug with spaces/playbook.yml:2 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: afunix <localhost> EXEC /bin/sh -c 'echo ~afunix && sleep 0' <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/afunix/.ansible/tmp `"&& mkdir "` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" && echo ansible-tmp-1641344178.921433-70880-92695461947872="` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" ) && sleep 0' Including module_utils file ansible/__init__.py Including module_utils file ansible/module_utils/__init__.py Including module_utils file ansible/module_utils/_text.py Including module_utils file ansible/module_utils/basic.py Including module_utils file ansible/module_utils/common/_collections_compat.py Including module_utils file ansible/module_utils/common/__init__.py Including module_utils file ansible/module_utils/common/_json_compat.py Including module_utils file ansible/module_utils/common/_utils.py Including module_utils file ansible/module_utils/common/arg_spec.py Including module_utils file ansible/module_utils/common/file.py Including module_utils file ansible/module_utils/common/locale.py Including module_utils file ansible/module_utils/common/parameters.py Including module_utils file ansible/module_utils/common/collections.py Including module_utils file ansible/module_utils/common/process.py Including module_utils file ansible/module_utils/common/sys_info.py Including module_utils file ansible/module_utils/common/text/converters.py Including module_utils file ansible/module_utils/common/text/__init__.py Including module_utils file ansible/module_utils/common/text/formatters.py Including module_utils file ansible/module_utils/common/validation.py Including module_utils file ansible/module_utils/common/warnings.py Including module_utils file ansible/module_utils/compat/selectors.py Including module_utils file ansible/module_utils/compat/__init__.py Including module_utils file ansible/module_utils/compat/_selectors2.py Including module_utils file ansible/module_utils/compat/selinux.py Including module_utils file ansible/module_utils/distro/__init__.py Including module_utils file ansible/module_utils/distro/_distro.py Including module_utils file ansible/module_utils/errors.py Including module_utils file ansible/module_utils/facts/ansible_collector.py Including module_utils file ansible/module_utils/facts/__init__.py Including module_utils file 
ansible/module_utils/facts/collector.py Including module_utils file ansible/module_utils/facts/compat.py Including module_utils file ansible/module_utils/facts/default_collectors.py Including module_utils file ansible/module_utils/facts/hardware/aix.py Including module_utils file ansible/module_utils/facts/hardware/__init__.py Including module_utils file ansible/module_utils/facts/hardware/base.py Including module_utils file ansible/module_utils/facts/hardware/darwin.py Including module_utils file ansible/module_utils/facts/hardware/dragonfly.py Including module_utils file ansible/module_utils/facts/hardware/freebsd.py Including module_utils file ansible/module_utils/facts/hardware/hpux.py Including module_utils file ansible/module_utils/facts/hardware/hurd.py Including module_utils file ansible/module_utils/facts/hardware/linux.py Including module_utils file ansible/module_utils/facts/hardware/netbsd.py Including module_utils file ansible/module_utils/facts/hardware/openbsd.py Including module_utils file ansible/module_utils/facts/hardware/sunos.py Including module_utils file ansible/module_utils/facts/namespace.py Including module_utils file ansible/module_utils/facts/network/aix.py Including module_utils file ansible/module_utils/facts/network/__init__.py Including module_utils file ansible/module_utils/facts/network/base.py Including module_utils file ansible/module_utils/facts/network/darwin.py Including module_utils file ansible/module_utils/facts/network/dragonfly.py Including module_utils file ansible/module_utils/facts/network/fc_wwn.py Including module_utils file ansible/module_utils/facts/network/freebsd.py Including module_utils file ansible/module_utils/facts/network/generic_bsd.py Including module_utils file ansible/module_utils/facts/network/hpux.py Including module_utils file ansible/module_utils/facts/network/hurd.py Including module_utils file ansible/module_utils/facts/network/iscsi.py Including module_utils file ansible/module_utils/facts/network/linux.py Including module_utils file ansible/module_utils/facts/network/netbsd.py Including module_utils file ansible/module_utils/facts/network/nvme.py Including module_utils file ansible/module_utils/facts/network/openbsd.py Including module_utils file ansible/module_utils/facts/network/sunos.py Including module_utils file ansible/module_utils/facts/other/facter.py Including module_utils file ansible/module_utils/facts/other/__init__.py Including module_utils file ansible/module_utils/facts/other/ohai.py Including module_utils file ansible/module_utils/facts/sysctl.py Including module_utils file ansible/module_utils/facts/system/apparmor.py Including module_utils file ansible/module_utils/facts/system/__init__.py Including module_utils file ansible/module_utils/facts/system/caps.py Including module_utils file ansible/module_utils/facts/system/chroot.py Including module_utils file ansible/module_utils/facts/system/cmdline.py Including module_utils file ansible/module_utils/facts/system/date_time.py Including module_utils file ansible/module_utils/facts/system/distribution.py Including module_utils file ansible/module_utils/facts/system/dns.py Including module_utils file ansible/module_utils/facts/system/env.py Including module_utils file ansible/module_utils/facts/system/fips.py Including module_utils file ansible/module_utils/facts/system/local.py Including module_utils file ansible/module_utils/facts/system/lsb.py Including module_utils file ansible/module_utils/facts/system/pkg_mgr.py Including module_utils file 
ansible/module_utils/facts/system/platform.py Including module_utils file ansible/module_utils/facts/system/python.py Including module_utils file ansible/module_utils/facts/system/selinux.py Including module_utils file ansible/module_utils/facts/system/service_mgr.py Including module_utils file ansible/module_utils/compat/version.py Including module_utils file ansible/module_utils/facts/system/ssh_pub_keys.py Including module_utils file ansible/module_utils/facts/system/user.py Including module_utils file ansible/module_utils/facts/timeout.py Including module_utils file ansible/module_utils/facts/utils.py Including module_utils file ansible/module_utils/facts/virtual/base.py Including module_utils file ansible/module_utils/facts/virtual/__init__.py Including module_utils file ansible/module_utils/facts/virtual/dragonfly.py Including module_utils file ansible/module_utils/facts/virtual/freebsd.py Including module_utils file ansible/module_utils/facts/virtual/hpux.py Including module_utils file ansible/module_utils/facts/virtual/linux.py Including module_utils file ansible/module_utils/facts/virtual/netbsd.py Including module_utils file ansible/module_utils/facts/virtual/openbsd.py Including module_utils file ansible/module_utils/facts/virtual/sunos.py Including module_utils file ansible/module_utils/facts/virtual/sysctl.py Including module_utils file ansible/module_utils/parsing/convert_bool.py Including module_utils file ansible/module_utils/parsing/__init__.py Including module_utils file ansible/module_utils/pycompat24.py Including module_utils file ansible/module_utils/six/__init__.py <localhost> Attempting python interpreter discovery <localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0' <localhost> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin) Using module file /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/modules/setup.py <localhost> PUT /Users/afunix/.ansible/tmp/ansible-local-70877y8t3c9b_/tmp3he6rmv3 TO /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/local/bin/python3.9 /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ > /dev/null 2>&1 && sleep 0' [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] META: ran handlers TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** task path: /Users/afunix/tmp/bug with spaces/roles/test/tasks/main.yml:2 looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" fatal: [localhost]: FAILED! => { "msg": "No file was found when using first_found." } PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76651
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-01-05T00:58:49Z
python
2022-03-02T21:16:47Z
test/integration/targets/lookup_first_found/files/vars file spaces.yml
closed
ansible/ansible
https://github.com/ansible/ansible
76651
lookup first_found fails if the search path contains spaces
### Summary lookup first_found fails when path is configured and contains spaces. Wasn't the issue with 2.9, started to occur with 2.12. ### Issue Type Bug Report ### Component Name first_found ### Ansible Version ```console $ ansible --version ansible [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment MacOS 12.1 Ubuntu 20.04 ### Steps to Reproduce hosts ```yaml localhost ansible_connection=local ``` playbook.yml ```yaml --- - hosts: all roles: - test ``` roles/test/tasks/main.yml ```yaml --- - include_vars: "{{lookup('first_found', params)}}" vars: params: files: - "test_vars.yml" paths: - "{{role_path}}/vars" ``` roles/test/vars/test_vars.yml ```yaml --- ``` ### Expected Results ansible-playbook executes successfully ### Actual Results ```console afunix@blake ~/tmp/bug $ pwd /Users/afunix/tmp/bug afunix@blake ~/tmp/bug $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** ok: [localhost] PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ pwd /Users/afunix/tmp/bug with spaces afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. ok: [localhost] TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> {"msg": "No file was found when using first_found."} PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 afunix@blake ~/tmp/bug with spaces $ ansible-playbook -Di hosts playbook.yml -vvvvv ansible-playbook [core 2.12.1] config file = None configured module search path = ['/Users/afunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/afunix/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible-playbook python version = 3.10.1 (main, Dec 6 2021, 22:25:40) [Clang 13.0.0 (clang-1300.0.29.3)] jinja version = 3.0.3 libyaml = True No config file found; using defaults Reading vault password file: /Users/afunix/.vault setting up inventory plugins host_list declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method script declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method auto declined parsing /Users/afunix/tmp/bug with spaces/hosts as it did not pass its verify_file() method Set default localhost to localhost Parsed /Users/afunix/tmp/bug with spaces/hosts inventory source with ini plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/plugins/callback/default.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************* Positional arguments: playbook.yml verbosity: 5 connection: smart timeout: 10 become_method: sudo tags: ('all',) diff: True inventory: ('/Users/afunix/tmp/bug with spaces/hosts',) forks: 5 1 plays in playbook.yml PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************* TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************* task path: /Users/afunix/tmp/bug with spaces/playbook.yml:2 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: afunix <localhost> EXEC /bin/sh -c 'echo ~afunix && sleep 0' <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/afunix/.ansible/tmp `"&& mkdir "` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" && echo ansible-tmp-1641344178.921433-70880-92695461947872="` echo /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872 `" ) && sleep 0' Including module_utils file ansible/__init__.py Including module_utils file ansible/module_utils/__init__.py Including module_utils file ansible/module_utils/_text.py Including module_utils file ansible/module_utils/basic.py Including module_utils file ansible/module_utils/common/_collections_compat.py Including module_utils file ansible/module_utils/common/__init__.py Including module_utils file ansible/module_utils/common/_json_compat.py Including module_utils file ansible/module_utils/common/_utils.py Including module_utils file ansible/module_utils/common/arg_spec.py Including module_utils file ansible/module_utils/common/file.py Including module_utils file ansible/module_utils/common/locale.py Including module_utils file ansible/module_utils/common/parameters.py Including module_utils file ansible/module_utils/common/collections.py Including module_utils file ansible/module_utils/common/process.py Including module_utils file ansible/module_utils/common/sys_info.py Including module_utils file ansible/module_utils/common/text/converters.py Including module_utils file ansible/module_utils/common/text/__init__.py Including module_utils file ansible/module_utils/common/text/formatters.py Including module_utils file ansible/module_utils/common/validation.py Including module_utils file ansible/module_utils/common/warnings.py Including module_utils file ansible/module_utils/compat/selectors.py Including module_utils file ansible/module_utils/compat/__init__.py Including module_utils file ansible/module_utils/compat/_selectors2.py Including module_utils file ansible/module_utils/compat/selinux.py Including module_utils file ansible/module_utils/distro/__init__.py Including module_utils file ansible/module_utils/distro/_distro.py Including module_utils file ansible/module_utils/errors.py Including module_utils file ansible/module_utils/facts/ansible_collector.py Including module_utils file ansible/module_utils/facts/__init__.py Including module_utils file 
ansible/module_utils/facts/collector.py Including module_utils file ansible/module_utils/facts/compat.py Including module_utils file ansible/module_utils/facts/default_collectors.py Including module_utils file ansible/module_utils/facts/hardware/aix.py Including module_utils file ansible/module_utils/facts/hardware/__init__.py Including module_utils file ansible/module_utils/facts/hardware/base.py Including module_utils file ansible/module_utils/facts/hardware/darwin.py Including module_utils file ansible/module_utils/facts/hardware/dragonfly.py Including module_utils file ansible/module_utils/facts/hardware/freebsd.py Including module_utils file ansible/module_utils/facts/hardware/hpux.py Including module_utils file ansible/module_utils/facts/hardware/hurd.py Including module_utils file ansible/module_utils/facts/hardware/linux.py Including module_utils file ansible/module_utils/facts/hardware/netbsd.py Including module_utils file ansible/module_utils/facts/hardware/openbsd.py Including module_utils file ansible/module_utils/facts/hardware/sunos.py Including module_utils file ansible/module_utils/facts/namespace.py Including module_utils file ansible/module_utils/facts/network/aix.py Including module_utils file ansible/module_utils/facts/network/__init__.py Including module_utils file ansible/module_utils/facts/network/base.py Including module_utils file ansible/module_utils/facts/network/darwin.py Including module_utils file ansible/module_utils/facts/network/dragonfly.py Including module_utils file ansible/module_utils/facts/network/fc_wwn.py Including module_utils file ansible/module_utils/facts/network/freebsd.py Including module_utils file ansible/module_utils/facts/network/generic_bsd.py Including module_utils file ansible/module_utils/facts/network/hpux.py Including module_utils file ansible/module_utils/facts/network/hurd.py Including module_utils file ansible/module_utils/facts/network/iscsi.py Including module_utils file ansible/module_utils/facts/network/linux.py Including module_utils file ansible/module_utils/facts/network/netbsd.py Including module_utils file ansible/module_utils/facts/network/nvme.py Including module_utils file ansible/module_utils/facts/network/openbsd.py Including module_utils file ansible/module_utils/facts/network/sunos.py Including module_utils file ansible/module_utils/facts/other/facter.py Including module_utils file ansible/module_utils/facts/other/__init__.py Including module_utils file ansible/module_utils/facts/other/ohai.py Including module_utils file ansible/module_utils/facts/sysctl.py Including module_utils file ansible/module_utils/facts/system/apparmor.py Including module_utils file ansible/module_utils/facts/system/__init__.py Including module_utils file ansible/module_utils/facts/system/caps.py Including module_utils file ansible/module_utils/facts/system/chroot.py Including module_utils file ansible/module_utils/facts/system/cmdline.py Including module_utils file ansible/module_utils/facts/system/date_time.py Including module_utils file ansible/module_utils/facts/system/distribution.py Including module_utils file ansible/module_utils/facts/system/dns.py Including module_utils file ansible/module_utils/facts/system/env.py Including module_utils file ansible/module_utils/facts/system/fips.py Including module_utils file ansible/module_utils/facts/system/local.py Including module_utils file ansible/module_utils/facts/system/lsb.py Including module_utils file ansible/module_utils/facts/system/pkg_mgr.py Including module_utils file 
ansible/module_utils/facts/system/platform.py Including module_utils file ansible/module_utils/facts/system/python.py Including module_utils file ansible/module_utils/facts/system/selinux.py Including module_utils file ansible/module_utils/facts/system/service_mgr.py Including module_utils file ansible/module_utils/compat/version.py Including module_utils file ansible/module_utils/facts/system/ssh_pub_keys.py Including module_utils file ansible/module_utils/facts/system/user.py Including module_utils file ansible/module_utils/facts/timeout.py Including module_utils file ansible/module_utils/facts/utils.py Including module_utils file ansible/module_utils/facts/virtual/base.py Including module_utils file ansible/module_utils/facts/virtual/__init__.py Including module_utils file ansible/module_utils/facts/virtual/dragonfly.py Including module_utils file ansible/module_utils/facts/virtual/freebsd.py Including module_utils file ansible/module_utils/facts/virtual/hpux.py Including module_utils file ansible/module_utils/facts/virtual/linux.py Including module_utils file ansible/module_utils/facts/virtual/netbsd.py Including module_utils file ansible/module_utils/facts/virtual/openbsd.py Including module_utils file ansible/module_utils/facts/virtual/sunos.py Including module_utils file ansible/module_utils/facts/virtual/sysctl.py Including module_utils file ansible/module_utils/parsing/convert_bool.py Including module_utils file ansible/module_utils/parsing/__init__.py Including module_utils file ansible/module_utils/pycompat24.py Including module_utils file ansible/module_utils/six/__init__.py <localhost> Attempting python interpreter discovery <localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0' <localhost> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin) Using module file /usr/local/Cellar/ansible/5.0.1/libexec/lib/python3.10/site-packages/ansible/modules/setup.py <localhost> PUT /Users/afunix/.ansible/tmp/ansible-local-70877y8t3c9b_/tmp3he6rmv3 TO /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/local/bin/python3.9 /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/AnsiballZ_setup.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /Users/afunix/.ansible/tmp/ansible-tmp-1641344178.921433-70880-92695461947872/ > /dev/null 2>&1 && sleep 0' [WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /usr/local/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.12/reference_appendices/interpreter_discovery.html for more information. 
ok: [localhost] META: ran handlers TASK [test : include_vars] ***************************************************************************************************************************************************************************************************************************************************** task path: /Users/afunix/tmp/bug with spaces/roles/test/tasks/main.yml:2 looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/with/test_vars.yml" looking for "with/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/with/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/roles/test/tasks/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/files/spaces/roles/test/vars/test_vars.yml" looking for "spaces/roles/test/vars/test_vars.yml" at "/Users/afunix/tmp/bug with spaces/spaces/roles/test/vars/test_vars.yml" fatal: [localhost]: FAILED! => { "msg": "No file was found when using first_found." } PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
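The verbose log above pinpoints the failure: the configured path `{{role_path}}/vars` expands to `/Users/afunix/tmp/bug with spaces/roles/test/vars`, yet the lookup ends up searching for the relative terms `with/test_vars.yml` and `spaces/roles/test/vars/test_vars.yml` — the absolute path has been split at each space. A minimal sketch of that failure mode follows; the helper name and exact separator handling are assumptions for illustration, not the plugin's real code:

```python
import os

def legacy_split(value):
    # Hypothetical helper mirroring first_found's legacy string form,
    # which treats ',', ';' and whitespace as term separators.
    return value.replace(',', ' ').replace(';', ' ').split(' ')

# Re-applying that split to an already-expanded role path shreds any
# directory name that contains spaces into bogus search terms.
path = "/Users/afunix/tmp/bug with spaces/roles/test/vars"  # {{role_path}}/vars
terms = [os.path.join(p, 'test_vars.yml') for p in legacy_split(path)]
print(terms)
# ['/Users/afunix/tmp/bug/test_vars.yml', 'with/test_vars.yml',
#  'spaces/roles/test/vars/test_vars.yml']
```

The last two entries are exactly the relative terms the verbose log shows being probed through the role's dwim search paths. One plausible fix direction is to apply separator splitting only to values supplied as single strings and to leave list-typed `paths`/`files` entries untouched.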
https://github.com/ansible/ansible/issues/76651
https://github.com/ansible/ansible/pull/77141
12865139472f0a2fa95b94983dcedb4d57e93b10
74a204e6f144f3eabd6384bbb665b6afd69117c3
2022-01-05T00:58:49Z
python
2022-03-02T21:16:47Z
test/integration/targets/lookup_first_found/tasks/main.yml
- name: test with_first_found
  set_fact: "first_found={{ item }}"
  with_first_found:
    - "does_not_exist"
    - "foo1"
    - "{{ role_path + '/files/bar1' }}"  # will only hit this if dwim search is broken

- name: set expected
  set_fact: first_expected="{{ role_path + '/files/foo1' }}"

- name: set unexpected
  set_fact: first_unexpected="{{ role_path + '/files/bar1' }}"

- name: verify with_first_found results
  assert:
    that:
      - "first_found == first_expected"
      - "first_found != first_unexpected"

- name: test q(first_found) with no files produces empty list
  set_fact:
    first_found_var: "{{ q('first_found', params, errors='ignore') }}"
  vars:
    params:
      files: "not_a_file.yaml"
      skip: True

- name: verify q(first_found) result
  assert:
    that:
      - "first_found_var == []"

- name: test lookup(first_found) with no files produces empty string
  set_fact:
    first_found_var: "{{ lookup('first_found', params, errors='ignore') }}"
  vars:
    params:
      files: "not_a_file.yaml"

- name: verify lookup(first_found) result
  assert:
    that:
      - "first_found_var == ''"

# NOTE: skip: True deprecated e17a2b502d6601be53c60d7ba1c627df419460c9, remove 2.12
- name: test first_found with no matches and skip=True does nothing
  set_fact: "this_not_set={{ item }}"
  vars:
    params:
      files:
        - not/a/file.yaml
        - another/non/file.yaml
      skip: True
  loop: "{{ q('first_found', params) }}"

- name: verify skip
  assert:
    that:
      - "this_not_set is not defined"

- name: test first_found with no matches and errors='ignore' skips in a loop
  set_fact: "this_not_set={{ item }}"
  vars:
    params:
      files:
        - not/a/file.yaml
        - another/non/file.yaml
  loop: "{{ query('first_found', params, errors='ignore') }}"

- name: verify errors=ignore
  assert:
    that:
      - "this_not_set is not defined"

- name: test legacy formats
  set_fact: hatethisformat={{item}}
  vars:
    params:
      files: not/a/file.yaml;hosts
      paths: not/a/path:/etc
  loop: "{{ q('first_found', params) }}"

- name: verify /etc/hosts was found
  assert:
    that:
      - "hatethisformat == '/etc/hosts'"
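The final case above ("test legacy formats") pins down the separators the lookup is expected to honor: `;` between files and PATH-style `:` between paths, with `/etc/hosts` as the expected winner. A sketch of that expansion under those assumptions (the helper name is hypothetical); note that also treating bare whitespace as a separator is exactly what collides with directories containing spaces in the issue above:

```python
def split_legacy(value, seps):
    # Split a legacy separator-joined string into individual entries.
    for sep in seps:
        value = value.replace(sep, ' ')
    return value.split()

files = split_legacy('not/a/file.yaml;hosts', seps=(',', ';'))
paths = split_legacy('not/a/path:/etc', seps=(',', ';', ':'))

# Cross paths with files in order; the first existing candidate wins.
candidates = ['%s/%s' % (p, f) for p in paths for f in files]
print(candidates)
# ['not/a/path/not/a/file.yaml', 'not/a/path/hosts',
#  '/etc/not/a/file.yaml', '/etc/hosts']
```

Of those candidates only `/etc/hosts` exists on a typical system, matching the test's `hatethisformat == '/etc/hosts'` assertion.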
closed
ansible/ansible
https://github.com/ansible/ansible
77192
moving ipaddr from netcommon to utils breaks the non-namespaced usage of nthhost (without the collection)
### Summary https://github.com/ansible-collections/ansible.netcommon/pull/359 This change breaks the non-namespaced usage of ipaddr filter (without the collection). example ipwrap ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console $ ansible --version code and can become unstable at any point. ansible [core 2.13.0.dev0] (bugfix_redirection de11a1ce78) last updated 2022/03/03 13:38:18 (GMT +550) config file = /Users/amhatre/ansible-collections/playbooks/ansible.cfg configured module search path = ['/Users/amhatre/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/amhatre/dev-workspace/ansible/lib/ansible ansible collection location = /Users/amhatre/ansible-collections/collections executable location = /Users/amhatre/dev-workspace/ansible/bin/ansible python version = 3.8.5 (default, Jan 7 2021, 17:04:44) [Clang 12.0.0 (clang-1200.0.32.28)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ (ansible_from_source1) amhatre@ashwinis-MacBook-Pro playbooks % ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. COLLECTIONS_PATHS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection DEFAULT_HOST_LIST(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection HOST_KEY_CHECKING(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False INTERPRETER_PYTHON(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = /Users/amhatre/ansible_venvs/py3.8 PARAMIKO_LOOK_FOR_KEYS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False PERSISTENT_COMMAND_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 (END) ``` ### OS / Environment mac os ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address. hosts: localhost connection: ansible.netcommon.network_cli gather_facts: no tasks: - name: Input for IPVwrap plugin ansible.builtin.set_fact: value: - 192.24.2.1 - host.fqdn - ::1 - '' - 192.168.32.0/24 - fe80::100/10 - 42540766412265424405338506004571095040/64 - True - debug: msg: "{{ value|ipwrap }}" ``` ### Expected Results (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] *** TASK [Input for IPVwrap plugin] ********************************************************************************* ok: [localhost] TASK [debug] **************************************************************************************************** [WARNING]: The value '' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. [WARNING]: The value 'True' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. 
ok: [localhost] => { "msg": [ "192.24.2.1", "host.fqdn", "[::1]", "", "192.168.32.0/24", "[fe80::100]/10", "[2001:db8:32c:faad::]/64", true ] } PLAY RECAP ****************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ### Actual Results ```console (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] ************************ TASK [Input for IPVwrap plugin] ********************************************************************************************************************************** ok: [localhost] TASK [debug] ***************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "template error while templating string: No filter named 'ipwrap'.. String: {{ value|ipwrap }}"} PLAY RECAP ******************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
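The failure message (`No filter named 'ipwrap'`) points at name resolution rather than at the filter itself: a bare `ipwrap` has to be rewritten to `ansible.builtin.ipwrap` and then follow redirect entries through collection routing metadata until it reaches `ansible.utils.ipwrap`. Below is a rough, self-contained sketch of that resolution order, modeled on `JinjaPluginIntercept.__getitem__` from `lib/ansible/template/__init__.py` (reproduced later in this document); the dict arguments are hypothetical stand-ins for the builtin filter table, the merged `plugin_routing` metadata, and the plugin loader:

```python
def resolve_filter(key, builtins, routing, loadable):
    """Resolve a possibly non-namespaced Jinja2 filter name (sketch)."""
    if '.' not in key:
        if key in builtins:               # genuine builtin/legacy filter: done
            return builtins[key]
        key = 'ansible.builtin.' + key    # assume a former builtin, as the
                                          # real __getitem__ does
    seen = set()
    while key in routing:                 # follow redirects; note the real
        if key in seen:                   # code follows a single hop and marks
            raise RuntimeError('redirect loop for %s' % key)  # cycles as TODO
        seen.add(key)
        key = routing[key]
    try:
        return loadable[key]
    except KeyError:
        raise KeyError('No filter named %r' % key)

# The chain a bare 'ipwrap' needs to traverse once ipaddr moved to
# ansible.utils: builtin -> netcommon -> utils.
routing = {
    'ansible.builtin.ipwrap': 'ansible.netcommon.ipwrap',
    'ansible.netcommon.ipwrap': 'ansible.utils.ipwrap',
}
loadable = {'ansible.utils.ipwrap': lambda value: value}  # stand-in filter
print(resolve_filter('ipwrap', {}, routing, loadable))
```

Notably, the source included below carries the comments `# TODO: implement support for collection-backed redirect (currently only builtin)` and `# TODO: handle recursive forwarding`, so at this point a redirect that continues from one collection to another — as `ipwrap`'s chain does — would dead-end, which is consistent with the reported error.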
https://github.com/ansible/ansible/issues/77192
https://github.com/ansible/ansible/pull/77210
50d28de9ba0d7271b966b3888916195cb9d28965
8063643b4cec51a72377da5f3fa354d3ff9e737a
2022-03-03T08:10:08Z
python
2022-03-07T20:39:56Z
changelogs/fragments/77210-fix-collection-filter-test-redirects.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77192
moving ipaddr from netcommon to utils breaks the non-namespaced usage of nthhost (without the collection)
### Summary https://github.com/ansible-collections/ansible.netcommon/pull/359 This change breaks the non-namespaced usage of ipaddr filter (without the collection). example ipwrap ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console $ ansible --version code and can become unstable at any point. ansible [core 2.13.0.dev0] (bugfix_redirection de11a1ce78) last updated 2022/03/03 13:38:18 (GMT +550) config file = /Users/amhatre/ansible-collections/playbooks/ansible.cfg configured module search path = ['/Users/amhatre/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/amhatre/dev-workspace/ansible/lib/ansible ansible collection location = /Users/amhatre/ansible-collections/collections executable location = /Users/amhatre/dev-workspace/ansible/bin/ansible python version = 3.8.5 (default, Jan 7 2021, 17:04:44) [Clang 12.0.0 (clang-1200.0.32.28)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ (ansible_from_source1) amhatre@ashwinis-MacBook-Pro playbooks % ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. COLLECTIONS_PATHS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection DEFAULT_HOST_LIST(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection HOST_KEY_CHECKING(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False INTERPRETER_PYTHON(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = /Users/amhatre/ansible_venvs/py3.8 PARAMIKO_LOOK_FOR_KEYS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False PERSISTENT_COMMAND_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 (END) ``` ### OS / Environment mac os ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address. hosts: localhost connection: ansible.netcommon.network_cli gather_facts: no tasks: - name: Input for IPVwrap plugin ansible.builtin.set_fact: value: - 192.24.2.1 - host.fqdn - ::1 - '' - 192.168.32.0/24 - fe80::100/10 - 42540766412265424405338506004571095040/64 - True - debug: msg: "{{ value|ipwrap }}" ``` ### Expected Results (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] *** TASK [Input for IPVwrap plugin] ********************************************************************************* ok: [localhost] TASK [debug] **************************************************************************************************** [WARNING]: The value '' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. [WARNING]: The value 'True' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. 
ok: [localhost] => { "msg": [ "192.24.2.1", "host.fqdn", "[::1]", "", "192.168.32.0/24", "[fe80::100]/10", "[2001:db8:32c:faad::]/64", true ] } PLAY RECAP ****************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ### Actual Results ```console (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] ************************ TASK [Input for IPVwrap plugin] ********************************************************************************************************************************** ok: [localhost] TASK [debug] ***************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "template error while templating string: No filter named 'ipwrap'.. String: {{ value|ipwrap }}"} PLAY RECAP ******************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77192
https://github.com/ansible/ansible/pull/77210
50d28de9ba0d7271b966b3888916195cb9d28965
8063643b4cec51a72377da5f3fa354d3ff9e737a
2022-03-03T08:10:08Z
python
2022-03-07T20:39:56Z
lib/ansible/template/__init__.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ast import datetime import os import pkgutil import pwd import re import time from contextlib import contextmanager from hashlib import sha1 from numbers import Number from traceback import format_exc from jinja2.exceptions import TemplateSyntaxError, UndefinedError from jinja2.loaders import FileSystemLoader from jinja2.nativetypes import NativeEnvironment from jinja2.runtime import Context, StrictUndefined from ansible import constants as C from ansible.errors import ( AnsibleAssertionError, AnsibleError, AnsibleFilterError, AnsibleLookupError, AnsibleOptionsError, AnsiblePluginRemovedError, AnsibleUndefinedVariable, ) from ansible.module_utils.six import string_types, text_type from ansible.module_utils._text import to_native, to_text, to_bytes from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.compat.importlib import import_module from ansible.plugins.loader import filter_loader, lookup_loader, test_loader from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat from ansible.template.template import AnsibleJ2Template from ansible.template.vars import AnsibleJ2Vars from ansible.utils.collection_loader import AnsibleCollectionRef from ansible.utils.display import Display from ansible.utils.collection_loader._collection_finder import _get_collection_metadata from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.native_jinja import NativeJinjaText from ansible.utils.unsafe_proxy import wrap_var display = Display() __all__ = ['Templar', 'generate_ansible_template_vars'] # Primitive Types which we don't want Jinja to convert to strings. 
NON_TEMPLATED_TYPES = (bool, Number) JINJA2_OVERRIDE = '#jinja2:' JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin')) JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end')) RANGE_TYPE = type(range(0)) def generate_ansible_template_vars(path, fullpath=None, dest_path=None): if fullpath is None: b_path = to_bytes(path) else: b_path = to_bytes(fullpath) try: template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name except (KeyError, TypeError): template_uid = os.stat(b_path).st_uid temp_vars = { 'template_host': to_text(os.uname()[1]), 'template_path': path, 'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)), 'template_uid': to_text(template_uid), 'template_run_date': datetime.datetime.now(), 'template_destpath': to_native(dest_path) if dest_path else None, } if fullpath is None: temp_vars['template_fullpath'] = os.path.abspath(path) else: temp_vars['template_fullpath'] = fullpath managed_default = C.DEFAULT_MANAGED_STR managed_str = managed_default.format( host=temp_vars['template_host'], uid=temp_vars['template_uid'], file=temp_vars['template_path'], ) temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path)))) return temp_vars def _escape_backslashes(data, jinja_env): """Double backslashes within jinja2 expressions A user may enter something like this in a playbook:: debug: msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}" The string inside of the {{ gets interpreted multiple times First by yaml. Then by python. And finally by jinja2 as part of it's variable. Because it is processed by both python and jinja2, the backslash escaped characters get unescaped twice. This means that we'd normally have to use four backslashes to escape that. This is painful for playbook authors as they have to remember different rules for inside vs outside of a jinja2 expression (The backslashes outside of the "{{ }}" only get processed by yaml and python. So they only need to be escaped once). The following code fixes this by automatically performing the extra quoting of backslashes inside of a jinja2 expression. """ if '\\' in data and '{{' in data: new_data = [] d2 = jinja_env.preprocess(data) in_var = False for token in jinja_env.lex(d2): if token[1] == 'variable_begin': in_var = True new_data.append(token[2]) elif token[1] == 'variable_end': in_var = False new_data.append(token[2]) elif in_var and token[1] == 'string': # Double backslashes only if we're inside of a jinja2 variable new_data.append(token[2].replace('\\', '\\\\')) else: new_data.append(token[2]) data = ''.join(new_data) return data def is_possibly_template(data, jinja_env): """Determines if a string looks like a template, by seeing if it contains a jinja2 start delimiter. Does not guarantee that the string is actually a template. This is different than ``is_template`` which is more strict. This method may return ``True`` on a string that is not templatable. Useful when guarding passing a string for templating, but when you want to allow the templating engine to make the final assessment which may result in ``TemplateSyntaxError``. """ if isinstance(data, string_types): for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string): if marker in data: return True return False def is_template(data, jinja_env): """This function attempts to quickly detect whether a value is a jinja2 template. 
To do so, we look for the first 2 matching jinja2 tokens for start and end delimiters. """ found = None start = True comment = False d2 = jinja_env.preprocess(data) # Quick check to see if this is remotely like a template before doing # more expensive investigation. if not is_possibly_template(d2, jinja_env): return False # This wraps a lot of code, but this is due to lex returning a generator # so we may get an exception at any part of the loop try: for token in jinja_env.lex(d2): if token[1] in JINJA2_BEGIN_TOKENS: if start and token[1] == 'comment_begin': # Comments can wrap other token types comment = True start = False # Example: variable_end -> variable found = token[1].split('_')[0] elif token[1] in JINJA2_END_TOKENS: if token[1].split('_')[0] == found: return True elif comment: continue return False except TemplateSyntaxError: return False return False def _count_newlines_from_end(in_str): ''' Counts the number of newlines at the end of a string. This is used during the jinja2 templating to ensure the count matches the input, since some newlines may be thrown away during the templating. ''' try: i = len(in_str) j = i - 1 while in_str[j] == '\n': j -= 1 return i - 1 - j except IndexError: # Uncommon cases: zero length string and string containing only newlines return i def recursive_check_defined(item): from jinja2.runtime import Undefined if isinstance(item, MutableMapping): for key in item: recursive_check_defined(item[key]) elif isinstance(item, list): for i in item: recursive_check_defined(i) else: if isinstance(item, Undefined): raise AnsibleFilterError("{0} is undefined".format(item)) def _is_rolled(value): """Helper method to determine if something is an unrolled generator, iterator, or similar object """ return ( isinstance(value, Iterator) or isinstance(value, MappingView) or isinstance(value, RANGE_TYPE) ) def _unroll_iterator(func): """Wrapper function, that intercepts the result of a templating and auto unrolls a generator, so that users are not required to explicitly use ``|list`` to unroll. """ def wrapper(*args, **kwargs): ret = func(*args, **kwargs) if _is_rolled(ret): return list(ret) return ret return _update_wrapper(wrapper, func) def _update_wrapper(wrapper, func): # This code is duplicated from ``functools.update_wrapper`` from Py3.7. # ``functools.update_wrapper`` was failing when the func was ``functools.partial`` for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'): try: value = getattr(func, attr) except AttributeError: pass else: setattr(wrapper, attr, value) for attr in ('__dict__',): getattr(wrapper, attr).update(getattr(func, attr, {})) wrapper.__wrapped__ = func return wrapper def _wrap_native_text(func): """Wrapper function, that intercepts the result of a filter and wraps it into NativeJinjaText which is then used in ``ansible_native_concat`` to indicate that it is a text which should not be passed into ``literal_eval``. """ def wrapper(*args, **kwargs): ret = func(*args, **kwargs) return NativeJinjaText(ret) return _update_wrapper(wrapper, func) class AnsibleUndefined(StrictUndefined): ''' A custom Undefined class, which returns further Undefined objects on access, rather than throwing an exception. 
''' def __getattr__(self, name): if name == '__UNSAFE__': # AnsibleUndefined should never be assumed to be unsafe # This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True`` raise AttributeError(name) # Return original Undefined object to preserve the first failure context return self def __getitem__(self, key): # Return original Undefined object to preserve the first failure context return self def __repr__(self): return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format( self._undefined_hint, self._undefined_obj, self._undefined_name ) def __contains__(self, item): # Return original Undefined object to preserve the first failure context return self class AnsibleContext(Context): ''' A custom context, which intercepts resolve() calls and sets a flag internally if any variable lookup returns an AnsibleUnsafe value. This flag is checked post-templating, and (when set) will result in the final templated result being wrapped in AnsibleUnsafe. ''' def __init__(self, *args, **kwargs): super(AnsibleContext, self).__init__(*args, **kwargs) self.unsafe = False def _is_unsafe(self, val): ''' Our helper function, which will also recursively check dict and list entries due to the fact that they may be repr'd and contain a key or value which contains jinja2 syntax and would otherwise lose the AnsibleUnsafe value. ''' if isinstance(val, dict): for key in val.keys(): if self._is_unsafe(val[key]): return True elif isinstance(val, list): for item in val: if self._is_unsafe(item): return True elif getattr(val, '__UNSAFE__', False) is True: return True return False def _update_unsafe(self, val): if val is not None and not self.unsafe and self._is_unsafe(val): self.unsafe = True def resolve(self, key): ''' The intercepted resolve(), which uses the helper above to set the internal flag whenever an unsafe variable value is returned. ''' val = super(AnsibleContext, self).resolve(key) self._update_unsafe(val) return val def resolve_or_missing(self, key): val = super(AnsibleContext, self).resolve_or_missing(key) self._update_unsafe(val) return val def get_all(self): """Return the complete context as a dict including the exported variables. For optimizations reasons this might not return an actual copy so be careful with using it. This is to prevent from running ``AnsibleJ2Vars`` through dict(): ``dict(self.parent, **self.vars)`` In Ansible this means that ALL variables would be templated in the process of re-creating the parent because ``AnsibleJ2Vars`` templates each variable in its ``__getitem__`` method. Instead we re-create the parent via ``AnsibleJ2Vars.add_locals`` that creates a new ``AnsibleJ2Vars`` copy without templating each variable. This will prevent unnecessarily templating unused variables in cases like setting a local variable and passing it to {% include %} in a template. Also see ``AnsibleJ2Template``and https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729 """ if not self.vars: return self.parent if not self.parent: return self.vars if isinstance(self.parent, AnsibleJ2Vars): return self.parent.add_locals(self.vars) else: # can this happen in Ansible? 
return dict(self.parent, **self.vars) class JinjaPluginIntercept(MutableMapping): def __init__(self, delegatee, pluginloader, *args, **kwargs): super(JinjaPluginIntercept, self).__init__(*args, **kwargs) self._delegatee = delegatee self._pluginloader = pluginloader if self._pluginloader.class_name == 'FilterModule': self._method_map_name = 'filters' self._dirname = 'filter' elif self._pluginloader.class_name == 'TestModule': self._method_map_name = 'tests' self._dirname = 'test' self._collection_jinja_func_cache = {} self._ansible_plugins_loaded = False def _load_ansible_plugins(self): if self._ansible_plugins_loaded: return for plugin in self._pluginloader.all(): try: method_map = getattr(plugin, self._method_map_name) self._delegatee.update(method_map()) except Exception as e: display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e)) continue if self._pluginloader.class_name == 'FilterModule': for plugin_name, plugin in self._delegatee.items(): if plugin_name in C.STRING_TYPE_FILTERS: self._delegatee[plugin_name] = _wrap_native_text(plugin) else: self._delegatee[plugin_name] = _unroll_iterator(plugin) self._ansible_plugins_loaded = True # FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's # aren't supposed to change during a run def __getitem__(self, key): self._load_ansible_plugins() try: if not isinstance(key, string_types): raise ValueError('key must be a string') key = to_native(key) if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect func = self._delegatee.get(key) if func: return func # didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path leaf_key = key key = 'ansible.builtin.' 
+ key else: leaf_key = key.split('.')[-1] acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname) if not acr: raise KeyError('invalid plugin name: {0}'.format(key)) ts = _get_collection_metadata(acr.collection) # TODO: implement support for collection-backed redirect (currently only builtin) # TODO: implement cycle detection (unified across collection redir as well) routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {}) deprecation_entry = routing_entry.get('deprecation') if deprecation_entry: warning_text = deprecation_entry.get('warning_text') removal_date = deprecation_entry.get('removal_date') removal_version = deprecation_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key) display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection) tombstone_entry = routing_entry.get('tombstone') if tombstone_entry: warning_text = tombstone_entry.get('warning_text') removal_date = tombstone_entry.get('removal_date') removal_version = tombstone_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" has been removed'.format(self._dirname, key) exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection, removed=True) raise AnsiblePluginRemovedError(exc_msg) redirect_fqcr = routing_entry.get('redirect', None) if redirect_fqcr: acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname) display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource)) key = redirect_fqcr # TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs) func = self._collection_jinja_func_cache.get(key) if func: return func try: pkg = import_module(acr.n_python_package_name) except ImportError: raise KeyError() parent_prefix = acr.collection if acr.subdirs: parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs) # TODO: implement collection-level redirect for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'): if ispkg: continue try: plugin_impl = self._pluginloader.get(module_name) except Exception as e: raise TemplateSyntaxError(to_native(e), 0) try: method_map = getattr(plugin_impl, self._method_map_name) func_items = method_map().items() except Exception as e: display.warning( "Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e), ) continue for func_name, func in func_items: fq_name = '.'.join((parent_prefix, func_name)) # FIXME: detect/warn on intra-collection function name collisions if self._pluginloader.class_name == 'FilterModule': if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \ func_name in C.STRING_TYPE_FILTERS: self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func) else: self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func) else: self._collection_jinja_func_cache[fq_name] = func function_impl = self._collection_jinja_func_cache[key] return function_impl except AnsiblePluginRemovedError as apre: raise TemplateSyntaxError(to_native(apre), 0) except KeyError: raise except Exception as ex: display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex))) display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc())) raise TemplateSyntaxError(to_native(ex), 0) def 
__setitem__(self, key, value): return self._delegatee.__setitem__(key, value) def __delitem__(self, key): raise NotImplementedError() def __iter__(self): # not strictly accurate since we're not counting dynamically-loaded values return iter(self._delegatee) def __len__(self): # not strictly accurate since we're not counting dynamically-loaded values return len(self._delegatee) @_unroll_iterator def _ansible_finalize(thing): """A custom finalize function for jinja2, which prevents None from being returned. This avoids a string of ``"None"`` as ``None`` has no importance in YAML. The function is decorated with ``_unroll_iterator`` so that users are not required to explicitly use ``|list`` to unroll a generator. This only affects the scenario where the final result of templating is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters which can produce a generator in the middle of a template are already wrapped with ``_unroll_generator`` in ``JinjaPluginIntercept``. """ return thing if thing is not None else '' class AnsibleEnvironment(NativeEnvironment): ''' Our custom environment, which simply allows us to override the class-level values for the Template and Context classes used by jinja2 internally. ''' context_class = AnsibleContext template_class = AnsibleJ2Template concat = staticmethod(ansible_eval_concat) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.filters = JinjaPluginIntercept(self.filters, filter_loader) self.tests = JinjaPluginIntercept(self.tests, test_loader) self.trim_blocks = True self.undefined = AnsibleUndefined self.finalize = _ansible_finalize class AnsibleNativeEnvironment(AnsibleEnvironment): concat = staticmethod(ansible_native_concat) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.finalize = _unroll_iterator(lambda thing: thing) class Templar: ''' The main class for templating, with the main entry-point of template(). ''' def __init__(self, loader, shared_loader_obj=None, variables=None): # NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used # directly. Keeping the arg for now in case 3rd party code "uses" it. self._loader = loader self._available_variables = {} if variables is None else variables self._cached_result = {} self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment self.environment = environment_class( extensions=self._get_extensions(), loader=FileSystemLoader(loader.get_basedir() if loader else '.'), ) # jinja2 global is inconsistent across versions, this normalizes them self.environment.globals['dict'] = dict # Custom globals self.environment.globals['lookup'] = self._lookup self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup self.environment.globals['now'] = self._now_datetime self.environment.globals['undef'] = self._make_undefined # the current rendering context under which the templar class is working self.cur_context = None # FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string)) self.jinja2_native = C.DEFAULT_JINJA2_NATIVE def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs): r"""Creates a new copy of Templar with a new environment. :kwarg environment_class: Environment class used for creating a new environment. 
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing environment attributes. :returns: Copy of Templar with updated environment. """ # We need to use __new__ to skip __init__, mainly not to create a new # environment there only to override it below new_env = object.__new__(environment_class) new_env.__dict__.update(self.environment.__dict__) new_templar = object.__new__(Templar) new_templar.__dict__.update(self.__dict__) new_templar.environment = new_env new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment mapping = { 'available_variables': new_templar, 'searchpath': new_env.loader, } for key, value in kwargs.items(): obj = mapping.get(key, new_env) try: if value is not None: setattr(obj, key, value) except AttributeError: # Ignore invalid attrs pass return new_templar def _get_extensions(self): ''' Return jinja2 extensions to load. If some extensions are set via jinja_extensions in ansible.cfg, we try to load them with the jinja environment. ''' jinja_exts = [] if C.DEFAULT_JINJA2_EXTENSIONS: # make sure the configuration directive doesn't contain spaces # and split extensions in an array jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',') return jinja_exts @property def available_variables(self): return self._available_variables @available_variables.setter def available_variables(self, variables): ''' Sets the list of template variables this Templar instance will use to template things, so we don't have to pass them around between internal methods. We also clear the template cache here, as the variables are being changed. ''' if not isinstance(variables, Mapping): raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables))) self._available_variables = variables self._cached_result = {} @contextmanager def set_temporary_context(self, **kwargs): """Context manager used to set temporary templating context, without having to worry about resetting original values afterward Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to set context on another object, it must be in ``mapping``. """ mapping = { 'available_variables': self, 'searchpath': self.environment.loader, } original = {} for key, value in kwargs.items(): obj = mapping.get(key, self.environment) try: original[key] = getattr(obj, key) if value is not None: setattr(obj, key, value) except AttributeError: # Ignore invalid attrs pass yield for key in original: obj = mapping.get(key, self.environment) setattr(obj, key, original[key]) def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, convert_data=True, static_vars=None, cache=True, disable_lookups=False): ''' Templates (possibly recursively) any given data as input. If convert_bare is set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}') before being sent through the template engine. ''' static_vars = [] if static_vars is None else static_vars # Don't template unsafe variables, just return them. if hasattr(variable, '__UNSAFE__'): return variable if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors if convert_bare: variable = self._convert_bare_variable(variable) if isinstance(variable, string_types): if not self.is_possibly_template(variable): return variable # Check to see if the string we are trying to render is just referencing a single # var. 
In this case we don't want to accidentally change the type of the variable # to a string by using the jinja template renderer. We just want to pass it. only_one = self.SINGLE_VAR.match(variable) if only_one: var_name = only_one.group(1) if var_name in self._available_variables: resolved_val = self._available_variables[var_name] if isinstance(resolved_val, NON_TEMPLATED_TYPES): return resolved_val elif resolved_val is None: return C.DEFAULT_NULL_REPRESENTATION # Using a cache in order to prevent template calls with already templated variables sha1_hash = None if cache: variable_hash = sha1(text_type(variable).encode('utf-8')) options_hash = sha1( ( text_type(preserve_trailing_newlines) + text_type(escape_backslashes) + text_type(fail_on_undefined) + text_type(overrides) ).encode('utf-8') ) sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest() if sha1_hash in self._cached_result: return self._cached_result[sha1_hash] result = self.do_template( variable, preserve_trailing_newlines=preserve_trailing_newlines, escape_backslashes=escape_backslashes, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, convert_data=convert_data, ) # we only cache in the case where we have a single variable # name, to make sure we're not putting things which may otherwise # be dynamic in the cache (filters, lookups, etc.) if cache and only_one: self._cached_result[sha1_hash] = result return result elif is_sequence(variable): return [self.template( v, preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) for v in variable] elif isinstance(variable, Mapping): d = {} # we don't use iteritems() here to avoid problems if the underlying dict # changes sizes due to the templating, which can happen with hostvars for k in variable.keys(): if k not in static_vars: d[k] = self.template( variable[k], preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) else: d[k] = variable[k] return d else: return variable def is_template(self, data): '''lets us know if data has a template''' if isinstance(data, string_types): return is_template(data, self.environment) elif isinstance(data, (list, tuple)): for v in data: if self.is_template(v): return True elif isinstance(data, dict): for k in data: if self.is_template(k) or self.is_template(data[k]): return True return False templatable = is_template def is_possibly_template(self, data): return is_possibly_template(data, self.environment) def _convert_bare_variable(self, variable): ''' Wraps a bare string, which may have an attribute portion (ie. foo.bar) in jinja2 variable braces so that it is evaluated properly. 
''' if isinstance(variable, string_types): contains_filters = "|" in variable first_part = variable.split("|")[0].split(".")[0].split("[")[0] if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable: return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string) # the variable didn't meet the conditions to be converted, # so just return it as-is return variable def _fail_lookup(self, name, *args, **kwargs): raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name) def _now_datetime(self, utc=False, fmt=None): '''jinja2 global function to return current datetime, potentially formatted via strftime''' if utc: now = datetime.datetime.utcnow() else: now = datetime.datetime.now() if fmt: return now.strftime(fmt) return now def _query_lookup(self, name, *args, **kwargs): ''' wrapper for lookup, force wantlist true''' kwargs['wantlist'] = True return self._lookup(name, *args, **kwargs) def _lookup(self, name, *args, **kwargs): instance = lookup_loader.get(name, loader=self._loader, templar=self) if instance is None: raise AnsibleError("lookup plugin (%s) not found" % name) wantlist = kwargs.pop('wantlist', False) allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS) errors = kwargs.pop('errors', 'strict') loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False) # safely catch run failures per #5059 try: ran = instance.run(loop_terms, variables=self._available_variables, **kwargs) except (AnsibleUndefinedVariable, UndefinedError) as e: raise AnsibleUndefinedVariable(e) except AnsibleOptionsError as e: # invalid options given to lookup, just reraise raise e except AnsibleLookupError as e: # lookup handled error but still decided to bail msg = 'Lookup failed but the error is being ignored: %s' % to_native(e) if errors == 'warn': display.warning(msg) elif errors == 'ignore': display.display(msg, log_only=True) else: raise e return [] if wantlist else None except Exception as e: # errors not handled by lookup msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \ (name, type(e), to_text(e)) if errors == 'warn': display.warning(msg) elif errors == 'ignore': display.display(msg, log_only=True) else: display.vvv('exception during Jinja2 execution: {0}'.format(format_exc())) raise AnsibleError(to_native(msg), orig_exc=e) return [] if wantlist else None if ran and allow_unsafe is False: if self.cur_context: self.cur_context.unsafe = True if wantlist: return wrap_var(ran) try: if isinstance(ran[0], NativeJinjaText): ran = wrap_var(NativeJinjaText(",".join(ran))) else: ran = wrap_var(",".join(ran)) except TypeError: # Lookup Plugins should always return lists. Throw an error if that's not # the case: if not isinstance(ran, Sequence): raise AnsibleError("The lookup plugin '%s' did not return a list." 
% name) # The TypeError we can recover from is when the value *inside* of the list # is not a string if len(ran) == 1: ran = wrap_var(ran[0]) else: ran = wrap_var(ran) return ran def _make_undefined(self, hint=None): from jinja2.runtime import Undefined if hint is None or isinstance(hint, Undefined) or hint == '': hint = "Mandatory variable has not been overridden" return AnsibleUndefined(hint) def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False, convert_data=False): if self.jinja2_native and not isinstance(data, string_types): return data # For preserving the number of input newlines in the output (used # later in this method) data_newlines = _count_newlines_from_end(data) if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors has_template_overrides = data.startswith(JINJA2_OVERRIDE) try: # NOTE Creating an overlay that lives only inside do_template means that overrides are not applied # when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay. # This is historic behavior that is kept for backwards compatibility. if overrides: myenv = self.environment.overlay(overrides) elif has_template_overrides: myenv = self.environment.overlay() else: myenv = self.environment # Get jinja env overrides from template if has_template_overrides: eol = data.find('\n') line = data[len(JINJA2_OVERRIDE):eol] data = data[eol + 1:] for pair in line.split(','): (key, val) = pair.split(':') key = key.strip() setattr(myenv, key, ast.literal_eval(val.strip())) if escape_backslashes: # Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\". data = _escape_backslashes(data, myenv) try: t = myenv.from_string(data) except TemplateSyntaxError as e: raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data))) except Exception as e: if 'recursion' in to_native(e): raise AnsibleError("recursive loop detected in template string: %s" % to_native(data)) else: return data if disable_lookups: t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup jvars = AnsibleJ2Vars(self, t.globals) self.cur_context = new_context = t.new_context(jvars, shared=True) rf = t.root_render_func(new_context) try: if not self.jinja2_native and not convert_data: res = ansible_concat(rf) else: res = self.environment.concat(rf) unsafe = getattr(new_context, 'unsafe', False) if unsafe: res = wrap_var(res) except TypeError as te: if 'AnsibleUndefined' in to_native(te): errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data) errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te) raise AnsibleUndefinedVariable(errmsg) else: display.debug("failing because of a type error, template data is: %s" % to_text(data)) raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te))) if isinstance(res, string_types) and preserve_trailing_newlines: # The low level calls above do not preserve the newline # characters at the end of the input data, so we use the # calculate the difference in newlines and append them # to the resulting output for parity # # Using Environment's keep_trailing_newline instead would # result in change in behavior when trailing newlines # would be kept also for included templates, for example: # "Hello {% include 'world.txt' %}!" 
would render as # "Hello world\n!\n" instead of "Hello world!\n". res_newlines = _count_newlines_from_end(res) if data_newlines > res_newlines: res += self.environment.newline_sequence * (data_newlines - res_newlines) if unsafe: res = wrap_var(res) return res except (UndefinedError, AnsibleUndefinedVariable) as e: if fail_on_undefined: raise AnsibleUndefinedVariable(e) else: display.debug("Ignoring undefined failure: %s" % to_text(e)) return data # for backwards compatibility in case anyone is using old private method directly _do_template = do_template
closed
ansible/ansible
https://github.com/ansible/ansible
77,192
moving ipaddr from netcommon to utils breaks the non-namespaced usage of nthhost (without the collection)
### Summary https://github.com/ansible-collections/ansible.netcommon/pull/359 This change breaks the non-namespaced usage of ipaddr filter (without the collection). example ipwrap ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console $ ansible --version code and can become unstable at any point. ansible [core 2.13.0.dev0] (bugfix_redirection de11a1ce78) last updated 2022/03/03 13:38:18 (GMT +550) config file = /Users/amhatre/ansible-collections/playbooks/ansible.cfg configured module search path = ['/Users/amhatre/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/amhatre/dev-workspace/ansible/lib/ansible ansible collection location = /Users/amhatre/ansible-collections/collections executable location = /Users/amhatre/dev-workspace/ansible/bin/ansible python version = 3.8.5 (default, Jan 7 2021, 17:04:44) [Clang 12.0.0 (clang-1200.0.32.28)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ (ansible_from_source1) amhatre@ashwinis-MacBook-Pro playbooks % ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. COLLECTIONS_PATHS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection DEFAULT_HOST_LIST(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection HOST_KEY_CHECKING(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False INTERPRETER_PYTHON(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = /Users/amhatre/ansible_venvs/py3.8 PARAMIKO_LOOK_FOR_KEYS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False PERSISTENT_COMMAND_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 (END) ``` ### OS / Environment mac os ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address. hosts: localhost connection: ansible.netcommon.network_cli gather_facts: no tasks: - name: Input for IPVwrap plugin ansible.builtin.set_fact: value: - 192.24.2.1 - host.fqdn - ::1 - '' - 192.168.32.0/24 - fe80::100/10 - 42540766412265424405338506004571095040/64 - True - debug: msg: "{{ value|ipwrap }}" ``` ### Expected Results (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] *** TASK [Input for IPVwrap plugin] ********************************************************************************* ok: [localhost] TASK [debug] **************************************************************************************************** [WARNING]: The value '' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. [WARNING]: The value 'True' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. 
ok: [localhost] => { "msg": [ "192.24.2.1", "host.fqdn", "[::1]", "", "192.168.32.0/24", "[fe80::100]/10", "[2001:db8:32c:faad::]/64", true ] } PLAY RECAP ****************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ### Actual Results ```console (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] ************************ TASK [Input for IPVwrap plugin] ********************************************************************************************************************************** ok: [localhost] TASK [debug] ***************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "template error while templating string: No filter named 'ipwrap'.. String: {{ value|ipwrap }}"} PLAY RECAP ******************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77192
https://github.com/ansible/ansible/pull/77210
50d28de9ba0d7271b966b3888916195cb9d28965
8063643b4cec51a72377da5f3fa354d3ff9e737a
2022-03-03T08:10:08Z
python
2022-03-07T20:39:56Z
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testredirect/meta/runtime.yml
plugin_routing: modules: ping: redirect: testns.testcoll.ping
closed
ansible/ansible
https://github.com/ansible/ansible
77,192
moving ipaddr from netcommon to utils breaks the non-namespaced usage of nthhost (without the collection)
### Summary https://github.com/ansible-collections/ansible.netcommon/pull/359 This change breaks the non-namespaced usage of ipaddr filter (without the collection). example ipwrap ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console $ ansible --version code and can become unstable at any point. ansible [core 2.13.0.dev0] (bugfix_redirection de11a1ce78) last updated 2022/03/03 13:38:18 (GMT +550) config file = /Users/amhatre/ansible-collections/playbooks/ansible.cfg configured module search path = ['/Users/amhatre/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/amhatre/dev-workspace/ansible/lib/ansible ansible collection location = /Users/amhatre/ansible-collections/collections executable location = /Users/amhatre/dev-workspace/ansible/bin/ansible python version = 3.8.5 (default, Jan 7 2021, 17:04:44) [Clang 12.0.0 (clang-1200.0.32.28)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ (ansible_from_source1) amhatre@ashwinis-MacBook-Pro playbooks % ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. COLLECTIONS_PATHS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection DEFAULT_HOST_LIST(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection HOST_KEY_CHECKING(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False INTERPRETER_PYTHON(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = /Users/amhatre/ansible_venvs/py3.8 PARAMIKO_LOOK_FOR_KEYS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False PERSISTENT_COMMAND_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 (END) ``` ### OS / Environment mac os ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address. hosts: localhost connection: ansible.netcommon.network_cli gather_facts: no tasks: - name: Input for IPVwrap plugin ansible.builtin.set_fact: value: - 192.24.2.1 - host.fqdn - ::1 - '' - 192.168.32.0/24 - fe80::100/10 - 42540766412265424405338506004571095040/64 - True - debug: msg: "{{ value|ipwrap }}" ``` ### Expected Results (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] *** TASK [Input for IPVwrap plugin] ********************************************************************************* ok: [localhost] TASK [debug] **************************************************************************************************** [WARNING]: The value '' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. [WARNING]: The value 'True' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. 
ok: [localhost] => { "msg": [ "192.24.2.1", "host.fqdn", "[::1]", "", "192.168.32.0/24", "[fe80::100]/10", "[2001:db8:32c:faad::]/64", true ] } PLAY RECAP ****************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ### Actual Results ```console (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] ************************ TASK [Input for IPVwrap plugin] ********************************************************************************************************************************** ok: [localhost] TASK [debug] ***************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "template error while templating string: No filter named 'ipwrap'.. String: {{ value|ipwrap }}"} PLAY RECAP ******************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77192
https://github.com/ansible/ansible/pull/77210
50d28de9ba0d7271b966b3888916195cb9d28965
8063643b4cec51a72377da5f3fa354d3ff9e737a
2022-03-03T08:10:08Z
python
2022-03-07T20:39:56Z
test/integration/targets/collections/runme.sh
#!/usr/bin/env bash set -eux export ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_user:$PWD/collection_root_sys export ANSIBLE_GATHERING=explicit export ANSIBLE_GATHER_SUBSET=minimal export ANSIBLE_HOST_PATTERN_MISMATCH=error unset ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH # ensure we can call collection module ansible localhost -m testns.testcoll.testmodule # ensure we can call collection module with ansible_collections in path ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_sys/ansible_collections ansible localhost -m testns.testcoll.testmodule echo "--- validating callbacks" # validate FQ callbacks in ansible-playbook ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible-playbook noop.yml | grep "usercallback says ok" # use adhoc for the rest of these tests, must force it to load other callbacks export ANSIBLE_LOAD_CALLBACK_PLUGINS=1 # validate redirected callback ANSIBLE_CALLBACKS_ENABLED=formerly_core_callback ansible localhost -m debug 2>&1 | grep -- "usercallback says ok" ## validate missing redirected callback ANSIBLE_CALLBACKS_ENABLED=formerly_core_missing_callback ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'formerly_core_missing_callback'" ## validate redirected + removed callback (fatal) ANSIBLE_CALLBACKS_ENABLED=formerly_core_removed_callback ansible localhost -m debug 2>&1 | grep -- "testns.testcoll.removedcallback has been removed" # validate avoiding duplicate loading of callback, even if using diff names [ "$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback,formerly_core_callback ansible localhost -m debug 2>&1 | grep -c 'usercallback says ok')" = "1" ] # ensure non existing callback does not crash ansible ANSIBLE_CALLBACKS_ENABLED=charlie.gomez.notme ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'charlie.gomez.notme'" unset ANSIBLE_LOAD_CALLBACK_PLUGINS # adhoc normally shouldn't load non-default plugins- let's be sure output=$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible localhost -m debug) if [[ "${output}" =~ "usercallback says ok" ]]; then echo fail; exit 1; fi echo "--- validating docs" # test documentation ansible-doc testns.testcoll.testmodule -vvv | grep -- "- normal_doc_frag" # same with symlink ln -s "${PWD}/testcoll2" ./collection_root_sys/ansible_collections/testns/testcoll2 ansible-doc testns.testcoll2.testmodule2 -vvv | grep "Test module" # now test we can list with symlink ansible-doc -l -vvv| grep "testns.testcoll2.testmodule2" echo "testing bad doc_fragments (expected ERROR message follows)" # test documentation failure ansible-doc testns.testcoll.testmodule_bad_docfrags -vvv 2>&1 | grep -- "unknown doc_fragment" echo "--- validating default collection" # test adhoc default collection resolution (use unqualified collection module with playbook dir under its collection) echo "testing adhoc default collection support with explicit playbook dir" ANSIBLE_PLAYBOOK_DIR=./collection_root_user/ansible_collections/testns/testcoll ansible localhost -m testmodule # we need multiple plays, and conditional import_playbook is noisy and causes problems, so choose here which one to use... 
if [[ ${INVENTORY_PATH} == *.winrm ]]; then export TEST_PLAYBOOK=windows.yml else export TEST_PLAYBOOK=posix.yml echo "testing default collection support" ansible-playbook -i "${INVENTORY_PATH}" collection_root_user/ansible_collections/testns/testcoll/playbooks/default_collection_playbook.yml "$@" fi echo "--- validating collections support in playbooks/roles" # run test playbooks ansible-playbook -i "${INVENTORY_PATH}" -v "${TEST_PLAYBOOK}" "$@" if [[ ${INVENTORY_PATH} != *.winrm ]]; then ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@" fi echo "--- validating bypass_host_loop with collection search" ansible-playbook -i host1,host2, -v test_bypass_host_loop.yml "$@" echo "--- validating inventory" # test collection inventories ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@" if [[ ${INVENTORY_PATH} != *.winrm ]]; then # base invocation tests ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@" # run playbook from collection, test default again, but with FQCN ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook.yml "$@" # run playbook from collection, test default again, but with FQCN and no extension ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook "$@" # run playbook that imports from collection ansible-playbook -i "${INVENTORY_PATH}" import_collection_pb.yml "$@" fi # test collection inventories ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@" # test plugin loader redirect_list ansible-playbook test_redirect_list.yml -v "$@" # test ansiballz cache dupe ansible-playbook ansiballz_dupe/test_ansiballz_cache_dupe_shortname.yml -v "$@" # test adjacent with --playbook-dir export ANSIBLE_COLLECTIONS_PATH='' ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=1 ansible-inventory --list --export --playbook-dir=. -v "$@" # use an inventory source with caching enabled ansible-playbook -i a.statichost.yml -i ./cache.statichost.yml -v check_populated_inventory.yml # Check that the inventory source with caching enabled was stored if [[ "$(find ./inventory_cache -type f ! -path "./inventory_cache/.keep" | wc -l)" -ne "1" ]]; then echo "Failed to find the expected single cache" exit 1 fi CACHEFILE="$(find ./inventory_cache -type f ! -path './inventory_cache/.keep')" if [[ $CACHEFILE != ./inventory_cache/prefix_* ]]; then echo "Unexpected cache file" exit 1 fi # Check the cache for the expected hosts if [[ "$(grep -wc "cache_host_a" "$CACHEFILE")" -ne "1" ]]; then echo "Failed to cache host as expected" exit 1 fi if [[ "$(grep -wc "dynamic_host_a" "$CACHEFILE")" -ne "0" ]]; then echo "Cached an incorrect source" exit 1 fi ./vars_plugin_tests.sh ./test_task_resolved_plugin.sh
closed
ansible/ansible
https://github.com/ansible/ansible
77,192
moving ipaddr from netcommon to utils breaks the non-namespaced usage of nthhost (without the collection)
### Summary https://github.com/ansible-collections/ansible.netcommon/pull/359 This change breaks the non-namespaced usage of ipaddr filter (without the collection). example ipwrap ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console $ ansible --version code and can become unstable at any point. ansible [core 2.13.0.dev0] (bugfix_redirection de11a1ce78) last updated 2022/03/03 13:38:18 (GMT +550) config file = /Users/amhatre/ansible-collections/playbooks/ansible.cfg configured module search path = ['/Users/amhatre/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/amhatre/dev-workspace/ansible/lib/ansible ansible collection location = /Users/amhatre/ansible-collections/collections executable location = /Users/amhatre/dev-workspace/ansible/bin/ansible python version = 3.8.5 (default, Jan 7 2021, 17:04:44) [Clang 12.0.0 (clang-1200.0.32.28)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ (ansible_from_source1) amhatre@ashwinis-MacBook-Pro playbooks % ansible-config dump --only-changed [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. COLLECTIONS_PATHS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection DEFAULT_HOST_LIST(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = ['/Users/amhatre/ansible-collection HOST_KEY_CHECKING(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False INTERPRETER_PYTHON(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = /Users/amhatre/ansible_venvs/py3.8 PARAMIKO_LOOK_FOR_KEYS(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = False PERSISTENT_COMMAND_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 PERSISTENT_CONNECT_TIMEOUT(/Users/amhatre/ansible-collections/playbooks/ansible.cfg) = 100 (END) ``` ### OS / Environment mac os ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address. hosts: localhost connection: ansible.netcommon.network_cli gather_facts: no tasks: - name: Input for IPVwrap plugin ansible.builtin.set_fact: value: - 192.24.2.1 - host.fqdn - ::1 - '' - 192.168.32.0/24 - fe80::100/10 - 42540766412265424405338506004571095040/64 - True - debug: msg: "{{ value|ipwrap }}" ``` ### Expected Results (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] *** TASK [Input for IPVwrap plugin] ********************************************************************************* ok: [localhost] TASK [debug] **************************************************************************************************** [WARNING]: The value '' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. [WARNING]: The value 'True' is not a valid IP address or network, passing this value to ipaddr filter might result in breaking change in future. 
ok: [localhost] => { "msg": [ "192.24.2.1", "host.fqdn", "[::1]", "", "192.168.32.0/24", "[fe80::100]/10", "[2001:db8:32c:faad::]/64", true ] } PLAY RECAP ****************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ### Actual Results ```console (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ansible-playbook test_ipwrap.yaml PLAY [Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]) This filter is used to wrap ipv6 address.] ************************ TASK [Input for IPVwrap plugin] ********************************************************************************************************************************** ok: [localhost] TASK [debug] ***************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "template error while templating string: No filter named 'ipwrap'.. String: {{ value|ipwrap }}"} PLAY RECAP ******************************************************************************************************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 (py3.6.10) amhatre@ashwinis-MacBook-Pro playbooks % ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77192
https://github.com/ansible/ansible/pull/77210
50d28de9ba0d7271b966b3888916195cb9d28965
8063643b4cec51a72377da5f3fa354d3ff9e737a
2022-03-03T08:10:08Z
python
2022-03-07T20:39:56Z
test/integration/targets/collections/test_collection_meta.yml
- hosts: localhost gather_facts: no collections: - testns.testcoll vars: # redirect connection ansible_connection: testns.testcoll.redirected_local tasks: - assert: that: ('data' | testns.testcoll.testfilter) == 'data_via_testfilter_from_userdir' # redirect module (multiple levels) - multilevel1: # redirect action - uses_redirected_action: # redirect import (consumed via action) - uses_redirected_import: # redirect lookup - assert: that: lookup('formerly_core_lookup') == 'mylookup_from_user_dir' # redirect filter - assert: that: ('yes' | formerly_core_filter) == True # legacy filter should mask redirected - assert: that: ('' | formerly_core_masked_filter) == 'hello from overridden formerly_core_masked_filter' # redirect test - assert: that: - "'stuff' is formerly_core_test('tuf')" - "'hello override' is formerly_core_masked_test" # redirect module (formerly internal) - formerly_core_ping: # redirect module from collection (with subdir) - testns.testcoll.module_subdir.subdir_ping_module: # redirect module_utils plugin (consumed via module) - uses_core_redirected_mu: # deprecated module (issues warning) - deprecated_ping: # redirect module (internal alias) - aliased_ping: # redirect module (cycle detection, fatal) # - looped_ping: # removed module (fatal) # - dead_ping:
closed
ansible/ansible
https://github.com/ansible/ansible
77,241
get_url documentation formats "not" incorrectly
### Summary The `get_url` module documentation has extraneous formatting in the "not" word in: ``` NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed. ``` As it is a part of a sentence, I suspect the "not" here should not have any specific formatting. ### Issue Type Documentation Report ### Component Name get_url ### Ansible Version ```console $ ansible --version irrelevant ``` ### Configuration ```console $ ansible-config dump --only-changed irrelevant ``` ### OS / Environment irrelevant ### Additional Information irrelevant ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77241
https://github.com/ansible/ansible/pull/77247
3c72aa32d6bd6c3b592d8b47c68ab9de922f170a
496f51ceacdebb76a91bda2973ae35f5afae90de
2022-03-09T05:49:24Z
python
2022-03-10T21:23:31Z
lib/ansible/modules/get_url.py
# -*- coding: utf-8 -*- # Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = r''' --- module: get_url short_description: Downloads files from HTTP, HTTPS, or FTP to node description: - Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server I(must) have direct access to the remote resource. - By default, if an environment variable C(<protocol>_proxy) is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see R(setting the environment,playbooks_environment)), or by using the use_proxy option. - HTTP redirects can redirect from HTTP to HTTPS so you should be sure that your proxy environment for both protocols is correct. - From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but will not download the entire file or verify it against hashes and will report incorrect changed status. - For Windows targets, use the M(ansible.windows.win_get_url) module instead. version_added: '0.6' options: url: description: - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path type: str required: true dest: description: - Absolute path of where to download the file to. - If C(dest) is a directory, either the server provided filename or, if none provided, the base name of the URL on the remote server will be used. If a directory, C(force) has no effect. - If C(dest) is a directory, the file will always be downloaded (regardless of the C(force) and C(checksum) option), but replaced only if the contents changed. type: path required: true tmp_dest: description: - Absolute path of where temporary file is downloaded to. - When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting - When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value. - U(https://docs.python.org/3/library/tempfile.html#tempfile.tempdir) type: path version_added: '2.1' force: description: - If C(yes) and C(dest) is not a directory, will download the file every time and replace the file if the contents change. If C(no), the file will only be downloaded if the destination does not exist. Generally should be C(yes) only for small local files. - Prior to 0.6, this module behaved as if C(yes) was the default. type: bool default: no version_added: '0.7' backup: description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. type: bool default: no version_added: '2.1' sha256sum: description: - If a SHA-256 checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. This option is deprecated and will be removed in version 2.14. Use option C(checksum) instead. default: '' type: str version_added: "1.3" checksum: description: - 'If a checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. Format: <algorithm>:<checksum|url>, e.g. 
checksum="sha256:D98291AC[...]B6DC7B97", checksum="sha256:http://example.com/path/sha256sum.txt"' - If you worry about portability, only the sha1 algorithm is available on all platforms and python versions. - The third party hashlib library can be installed for access to additional algorithms. - Additionally, if a checksum is passed to this parameter, and the file exist under the C(dest) location, the I(destination_checksum) would be calculated, and if checksum equals I(destination_checksum), the file download would be skipped (unless C(force) is true). If the checksum does not equal I(destination_checksum), the destination file is deleted. type: str default: '' version_added: "2.0" use_proxy: description: - if C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts. type: bool default: yes validate_certs: description: - If C(no), SSL certificates will not be validated. - This should only be used on personally controlled sites using self-signed certificates. type: bool default: yes timeout: description: - Timeout in seconds for URL request. type: int default: 10 version_added: '1.8' headers: description: - Add custom HTTP headers to a request in hash/dict format. - The hash/dict format was added in Ansible 2.6. - Previous versions used a C("key:value,key:value") string format. - The C("key:value,key:value") string format is deprecated and has been removed in version 2.10. type: dict version_added: '2.0' url_username: description: - The username for use in HTTP basic authentication. - This parameter can be used without C(url_password) for sites that allow empty passwords. - Since version 2.8 you can also use the C(username) alias for this option. type: str aliases: ['username'] version_added: '1.6' url_password: description: - The password for use in HTTP basic authentication. - If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used. - Since version 2.8 you can also use the 'password' alias for this option. type: str aliases: ['password'] version_added: '1.6' force_basic_auth: description: - Force the sending of the Basic authentication header upon initial request. - httplib2, the library used by the uri module only sends authentication information when a webservice responds to an initial request with a 401 status. Since some basic auth services do not properly send a 401, logins will fail. type: bool default: no version_added: '2.0' client_cert: description: - PEM formatted certificate chain file to be used for SSL client authentication. - This file can also include the key as well, and if the key is included, C(client_key) is not required. type: path version_added: '2.4' client_key: description: - PEM formatted file that contains your private key to be used for SSL client authentication. - If C(client_cert) contains both the certificate and key, this option is not required. type: path version_added: '2.4' http_agent: description: - Header to identify as, generally appears in web server logs. type: str default: ansible-httpget unredirected_headers: description: - A list of header names that will not be sent on subsequent redirected requests. This list is case insensitive. By default all headers will be redirected. In some cases it may be beneficial to list headers such as C(Authorization) here to avoid potential credential exposure. 
default: [] type: list elements: str version_added: '2.12' use_gssapi: description: - Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate authentication. - Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed. - Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var C(KRB5CCNAME) that specified a custom Kerberos credential cache. - NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed. type: bool default: no version_added: '2.11' # informational: requirements for nodes extends_documentation_fragment: - files - action_common_attributes attributes: check_mode: details: the changed status will reflect comparison to an empty source file support: partial diff_mode: support: none platform: platforms: posix notes: - For Windows targets, use the M(ansible.windows.win_get_url) module instead. seealso: - module: ansible.builtin.uri - module: ansible.windows.win_get_url author: - Jan-Piet Mens (@jpmens) ''' EXAMPLES = r''' - name: Download foo.conf ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf mode: '0440' - name: Download file and force basic auth ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf force_basic_auth: yes - name: Download file with custom HTTP headers ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf headers: key1: one key2: two - name: Download file with check (sha256) ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c - name: Download file with check (md5) ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: md5:66dffb5228a211e61d6d7ef4a86f5758 - name: Download file with checksum url (sha256) ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: sha256:http://example.com/path/sha256sum.txt - name: Download file from a file path ansible.builtin.get_url: url: file:///tmp/afile.txt dest: /tmp/afilecopy.txt - name: < Fetch file that requires authentication. 
username/password only available since 2.8, in older versions you need to use url_username/url_password ansible.builtin.get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf username: bar password: '{{ mysecret }}' ''' RETURN = r''' backup_file: description: name of backup file created after download returned: changed and if backup=yes type: str sample: /path/to/file.txt.2015-02-12@22:09~ checksum_dest: description: sha1 checksum of the file after copy returned: success type: str sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827 checksum_src: description: sha1 checksum of the file returned: success type: str sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827 dest: description: destination file/path returned: success type: str sample: /path/to/file.txt elapsed: description: The number of seconds that elapsed while performing the download returned: always type: int sample: 23 gid: description: group id of the file returned: success type: int sample: 100 group: description: group of the file returned: success type: str sample: "httpd" md5sum: description: md5 checksum of the file after download returned: when supported type: str sample: "2a5aeecc61dc98c4d780b14b330e3282" mode: description: permissions of the target returned: success type: str sample: "0644" msg: description: the HTTP message from the request returned: always type: str sample: OK (unknown bytes) owner: description: owner of the file returned: success type: str sample: httpd secontext: description: the SELinux security context of the file returned: success type: str sample: unconfined_u:object_r:user_tmp_t:s0 size: description: size of the target returned: success type: int sample: 1220 src: description: source file used after download returned: always type: str sample: /tmp/tmpAdFLdV state: description: state of the target returned: success type: str sample: file status_code: description: the HTTP status code from the request returned: always type: int sample: 200 uid: description: owner id of the file, after execution returned: success type: int sample: 100 url: description: the actual URL used for the request returned: always type: str sample: https://www.ansible.com/ ''' import datetime import os import re import shutil import tempfile import traceback from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.six.moves.urllib.parse import urlsplit from ansible.module_utils._text import to_native from ansible.module_utils.urls import fetch_url, url_argument_spec # ============================================================== # url handling def url_filename(url): fn = os.path.basename(urlsplit(url)[2]) if fn == '': return 'index.html' return fn def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest='', method='GET', unredirected_headers=None): """ Download data from the url and store in a temporary file. 
Return (tempfile, info about the request) """ start = datetime.datetime.utcnow() rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method, unredirected_headers=unredirected_headers) elapsed = (datetime.datetime.utcnow() - start).seconds if info['status'] == 304: module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed) # Exceptions in fetch_url may result in a status -1, the ensures a proper error to the user in all cases if info['status'] == -1: module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed) if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')): module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed) # create a temporary file and copy content to do checksum-based replacement if tmp_dest: # tmp_dest should be an existing dir tmp_dest_is_dir = os.path.isdir(tmp_dest) if not tmp_dest_is_dir: if os.path.exists(tmp_dest): module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed) else: module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed) else: tmp_dest = module.tmpdir fd, tempname = tempfile.mkstemp(dir=tmp_dest) f = os.fdopen(fd, 'wb') try: shutil.copyfileobj(rsp, f) except Exception as e: os.remove(tempname) module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc()) f.close() rsp.close() return tempname, info def extract_filename_from_headers(headers): """ Extracts a filename from the given dict of HTTP headers. Looks for the content-disposition header and applies a regex. Returns the filename if successful, else None.""" cont_disp_regex = 'attachment; ?filename="?([^"]+)' res = None if 'content-disposition' in headers: cont_disp = headers['content-disposition'] match = re.match(cont_disp_regex, cont_disp) if match: res = match.group(1) # Try preventing any funny business. 
res = os.path.basename(res) return res def is_url(checksum): """ Returns True if checksum value has supported URL scheme, else False.""" supported_schemes = ('http', 'https', 'ftp', 'file') return urlsplit(checksum).scheme in supported_schemes # ============================================================== # main def main(): argument_spec = url_argument_spec() # setup aliases argument_spec['url_username']['aliases'] = ['username'] argument_spec['url_password']['aliases'] = ['password'] argument_spec.update( url=dict(type='str', required=True), dest=dict(type='path', required=True), backup=dict(type='bool', default=False), sha256sum=dict(type='str', default=''), checksum=dict(type='str', default=''), timeout=dict(type='int', default=10), headers=dict(type='dict'), tmp_dest=dict(type='path'), unredirected_headers=dict(type='list', elements='str', default=[]), ) module = AnsibleModule( # not checking because of daisy chain to file module argument_spec=argument_spec, add_file_common_args=True, supports_check_mode=True, mutually_exclusive=[['checksum', 'sha256sum']], ) if module.params.get('sha256sum'): module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead', version='2.14', collection_name='ansible.builtin') url = module.params['url'] dest = module.params['dest'] backup = module.params['backup'] force = module.params['force'] sha256sum = module.params['sha256sum'] checksum = module.params['checksum'] use_proxy = module.params['use_proxy'] timeout = module.params['timeout'] headers = module.params['headers'] tmp_dest = module.params['tmp_dest'] unredirected_headers = module.params['unredirected_headers'] result = dict( changed=False, checksum_dest=None, checksum_src=None, dest=dest, elapsed=0, url=url, ) dest_is_dir = os.path.isdir(dest) last_mod_time = None # workaround for usage of deprecated sha256sum parameter if sha256sum: checksum = 'sha256:%s' % (sha256sum) # checksum specified, parse for algorithm and checksum if checksum: try: algorithm, checksum = checksum.split(':', 1) except ValueError: module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result) if is_url(checksum): checksum_url = checksum # download checksum file to checksum_tmpsrc checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest, unredirected_headers=unredirected_headers) with open(checksum_tmpsrc) as f: lines = [line.rstrip('\n') for line in f] os.remove(checksum_tmpsrc) checksum_map = [] for line in lines: # Split by one whitespace to keep the leading type char ' ' (whitespace) for text and '*' for binary parts = line.split(" ", 1) if len(parts) == 2: # Remove the leading type char, we expect if parts[1].startswith((" ", "*",)): parts[1] = parts[1][1:] # Append checksum and path without potential leading './' checksum_map.append((parts[0], parts[1].lstrip("./"))) filename = url_filename(url) # Look through each line in the checksum file for a hash corresponding to # the filename in the url, returning the first hash that is found. 
for cksum in (s for (s, f) in checksum_map if f == filename): checksum = cksum break else: checksum = None if checksum is None: module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url)) # Remove any non-alphanumeric characters, including the infamous # Unicode zero-width space checksum = re.sub(r'\W+', '', checksum).lower() # Ensure the checksum portion is a hexdigest try: int(checksum, 16) except ValueError: module.fail_json(msg='The checksum format is invalid', **result) if not dest_is_dir and os.path.exists(dest): checksum_mismatch = False # If the download is not forced and there is a checksum, allow # checksum match to skip the download. if not force and checksum != '': destination_checksum = module.digest_from_file(dest, algorithm) if checksum != destination_checksum: checksum_mismatch = True # Not forcing redownload, unless checksum does not match if not force and checksum and not checksum_mismatch: # Not forcing redownload, unless checksum does not match # allow file attribute changes file_args = module.load_file_common_arguments(module.params, path=dest) result['changed'] = module.set_fs_attributes_if_different(file_args, False) if result['changed']: module.exit_json(msg="file already exists but file attributes changed", **result) module.exit_json(msg="file already exists", **result) # If the file already exists, prepare the last modified time for the # request. mtime = os.path.getmtime(dest) last_mod_time = datetime.datetime.utcfromtimestamp(mtime) # If the checksum does not match we have to force the download # because last_mod_time may be newer than on remote if checksum_mismatch: force = True # download to tmpsrc start = datetime.datetime.utcnow() method = 'HEAD' if module.check_mode else 'GET' tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest, method, unredirected_headers=unredirected_headers) result['elapsed'] = (datetime.datetime.utcnow() - start).seconds result['src'] = tmpsrc # Now the request has completed, we can finally generate the final # destination file name from the info dict. if dest_is_dir: filename = extract_filename_from_headers(info) if not filename: # Fall back to extracting the filename from the URL. # Pluck the URL from the info, since a redirect could have changed # it. 
filename = url_filename(info['url']) dest = os.path.join(dest, filename) result['dest'] = dest # raise an error if there is no tmpsrc file if not os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result) if not os.access(tmpsrc, os.R_OK): os.remove(tmpsrc) module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result) result['checksum_src'] = module.sha1(tmpsrc) # check if there is no dest file if os.path.exists(dest): # raise an error if copy has no permission on dest if not os.access(dest, os.W_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not writable" % (dest), **result) if not os.access(dest, os.R_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not readable" % (dest), **result) result['checksum_dest'] = module.sha1(dest) else: if not os.path.exists(os.path.dirname(dest)): os.remove(tmpsrc) module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result) if not os.access(os.path.dirname(dest), os.W_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result) if module.check_mode: if os.path.exists(tmpsrc): os.remove(tmpsrc) result['changed'] = ('checksum_dest' not in result or result['checksum_src'] != result['checksum_dest']) module.exit_json(msg=info.get('msg', ''), **result) backup_file = None if result['checksum_src'] != result['checksum_dest']: try: if backup: if os.path.exists(dest): backup_file = module.backup_local(dest) module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes']) except Exception as e: if os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), exception=traceback.format_exc(), **result) result['changed'] = True else: result['changed'] = False if os.path.exists(tmpsrc): os.remove(tmpsrc) if checksum != '': destination_checksum = module.digest_from_file(dest, algorithm) if checksum != destination_checksum: os.remove(dest) module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result) # allow file attribute changes file_args = module.load_file_common_arguments(module.params, path=dest) result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed']) # Backwards compat only. We'll return None on FIPS enabled systems try: result['md5sum'] = module.md5(dest) except ValueError: result['md5sum'] = None if backup_file: result['backup_file'] = backup_file # Mission complete module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result) if __name__ == '__main__': main()
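As a reading aid for the checksum handling in `main()` above, here is a minimal standalone sketch of how the module parses a downloaded checksum file and matches it against the URL's filename. The sample checksum lines and target filename below are hypothetical; the parsing mirrors the module code rather than replacing it.

```python
# Sketch of get_url's checksum-file matching (see main() above).
lines = [
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c  file.conf",
    "7d865e959b2466918c9863afca942d0fb89d7c9ac0c99bafc3749504ded97730 *other.bin",
]

checksum_map = []
for line in lines:
    # Split once so the leading type char (' ' for text, '*' for binary) survives.
    parts = line.split(" ", 1)
    if len(parts) == 2:
        if parts[1].startswith((" ", "*")):
            parts[1] = parts[1][1:]  # drop the type char
        checksum_map.append((parts[0], parts[1].lstrip("./")))  # strip any leading './'

filename = "file.conf"  # what url_filename() would extract from the download URL
checksum = next((s for s, f in checksum_map if f == filename), None)
print(checksum)  # b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
```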
closed
ansible/ansible
https://github.com/ansible/ansible
76,464
ansible-test: development environment no longer works
### Summary I am developing modules using the latest stable versions of Ansible on either Python 3.8 or 3.9. Both versions are giving errors. ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console ansible [core 2.12.0] config file = /Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kbreit/.pyenv/versions/3.9.7/envs/ansible-development-latest/lib/python3.9/site-packages/ansible ansible collection location = /Users/kbreit/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kbreit/.pyenv/versions/ansible-development-latest/bin/ansible python version = 3.9.7 (default, Nov 7 2021, 06:44:13) [Clang 12.0.5 (clang-1205.0.22.9)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kbreit/pass.txt GALAXY_SERVER_LIST(/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg) = ['automation_hub'] ``` ### OS / Environment macOS and Ubuntu 21.04 ### Steps to Reproduce Reproducing it largely depends on ansible-test. However... <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) (ansible-development-latest) meraki [master●●] % ansible-test network-integration --allow-unsupported --python 3.9 --docker default meraki_network ``` ### Expected Results I expect the `meraki_network` module (or whatever module I'm testing) to execute in full. ### Actual Results ```console Starting new "ansible-test-controller-pVmIMX5s" container. Adding "ansible-test-controller-pVmIMX5s" to container database. NOTICE: Sourcing inventory file "tests/integration/inventory.networking" from "/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/tests/integration/inventory.networking". 
Traceback (most recent call last): File "/root/ansible/bin/ansible-test", line 42, in <module> main() File "/root/ansible/bin/ansible-test", line 33, in main cli_main() File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 70, in main args.func(config) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/network.py", line 73, in command_network_integration command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 458, in command_integration_filtered create_inventory(args, host_state, inventory_path, target) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 382, in create_inventory create_network_inventory(args, inventory_path, target_profiles) File "/root/ansible/test/lib/ansible_test/_internal/inventory.py", line 90, in create_network_inventory shutil.copyfile(first.config.path, path) File "/usr/lib/python3.9/shutil.py", line 243, in copyfile if _samefile(src, dst): File "/usr/lib/python3.9/shutil.py", line 220, in _samefile return os.path.samefile(src, dst) File "/usr/lib/python3.9/genericpath.py", line 100, in samefile s1 = os.stat(f1) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ERROR: Command "docker exec -it ansible-test-controller-pVmIMX5s /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/cisco/meraki LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test network-integration --containers '{}' --allow-unsupported meraki_network --metadata tests/output/.tmp/metadata-os7__ecv.json --truncate 137 --color yes --host-path tests/output/.tmp/host-_fjd92yd" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
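To make the traceback concrete: `create_network_inventory` passes `first.config.path` to `shutil.copyfile`, and when that path is unset the call bottoms out in `os.stat(None)`. A minimal repro plus a defensive guard follows; this is a sketch under the assumption that the inventory config's path can be `None` during delegation, not necessarily what the linked fix implements.

```python
# Repro of the failure mode above: shutil.copyfile() stats its source first,
# and os.stat(None) raises TypeError on Python 3.9.
import shutil

config_path = None  # hypothetical stand-in for first.config.path

try:
    shutil.copyfile(config_path, "/tmp/inventory.networking")
except TypeError as ex:
    print(ex)  # stat: path should be string, bytes, os.PathLike or integer, not NoneType

# A defensive guard in the caller (a sketch, not necessarily what PR 77255 does):
if config_path is None:
    print("inventory source has no path; fail with a clear error instead of a TypeError")
else:
    shutil.copyfile(config_path, "/tmp/inventory.networking")
```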
https://github.com/ansible/ansible/issues/76464
https://github.com/ansible/ansible/pull/77255
b60a5eefb2bb93051b8b939889eb3a26e3319c7c
e8afdac06e3cdbb52885ec15660ea265e62d63ab
2021-12-04T02:09:37Z
python
2022-03-11T02:17:49Z
changelogs/fragments/ansible-test-delegation-inventory.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
76,464
ansible-test: development environment no longer works
### Summary I am developing modules using the latest stable versions of Ansible on either Python 3.8 or 3.9. Both versions are giving errors. ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console ansible [core 2.12.0] config file = /Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kbreit/.pyenv/versions/3.9.7/envs/ansible-development-latest/lib/python3.9/site-packages/ansible ansible collection location = /Users/kbreit/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kbreit/.pyenv/versions/ansible-development-latest/bin/ansible python version = 3.9.7 (default, Nov 7 2021, 06:44:13) [Clang 12.0.5 (clang-1205.0.22.9)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kbreit/pass.txt GALAXY_SERVER_LIST(/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg) = ['automation_hub'] ``` ### OS / Environment macOS and Ubuntu 21.04 ### Steps to Reproduce Reproducing it largely depends on ansible-test. However... <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) (ansible-development-latest) meraki [master●●] % ansible-test network-integration --allow-unsupported --python 3.9 --docker default meraki_network ``` ### Expected Results I expect the `meraki_network` module (or whatever module I'm testing) to execute in full. ### Actual Results ```console Starting new "ansible-test-controller-pVmIMX5s" container. Adding "ansible-test-controller-pVmIMX5s" to container database. NOTICE: Sourcing inventory file "tests/integration/inventory.networking" from "/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/tests/integration/inventory.networking". 
Traceback (most recent call last): File "/root/ansible/bin/ansible-test", line 42, in <module> main() File "/root/ansible/bin/ansible-test", line 33, in main cli_main() File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 70, in main args.func(config) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/network.py", line 73, in command_network_integration command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 458, in command_integration_filtered create_inventory(args, host_state, inventory_path, target) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 382, in create_inventory create_network_inventory(args, inventory_path, target_profiles) File "/root/ansible/test/lib/ansible_test/_internal/inventory.py", line 90, in create_network_inventory shutil.copyfile(first.config.path, path) File "/usr/lib/python3.9/shutil.py", line 243, in copyfile if _samefile(src, dst): File "/usr/lib/python3.9/shutil.py", line 220, in _samefile return os.path.samefile(src, dst) File "/usr/lib/python3.9/genericpath.py", line 100, in samefile s1 = os.stat(f1) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ERROR: Command "docker exec -it ansible-test-controller-pVmIMX5s /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/cisco/meraki LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test network-integration --containers '{}' --allow-unsupported meraki_network --metadata tests/output/.tmp/metadata-os7__ecv.json --truncate 137 --color yes --host-path tests/output/.tmp/host-_fjd92yd" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76464
https://github.com/ansible/ansible/pull/77255
b60a5eefb2bb93051b8b939889eb3a26e3319c7c
e8afdac06e3cdbb52885ec15660ea265e62d63ab
2021-12-04T02:09:37Z
python
2022-03-11T02:17:49Z
test/lib/ansible_test/_internal/commands/integration/__init__.py
"""Ansible integration test infrastructure.""" from __future__ import annotations import contextlib import datetime import json import os import re import shutil import tempfile import time import typing as t from ...encoding import ( to_bytes, ) from ...ansible_util import ( ansible_environment, ) from ...executor import ( get_changes_filter, AllTargetsSkipped, Delegate, ListTargets, ) from ...python_requirements import ( install_requirements, ) from ...ci import ( get_ci_provider, ) from ...target import ( analyze_integration_target_dependencies, walk_integration_targets, IntegrationTarget, walk_internal_targets, TIntegrationTarget, IntegrationTargetType, ) from ...config import ( IntegrationConfig, NetworkIntegrationConfig, PosixIntegrationConfig, WindowsIntegrationConfig, TIntegrationConfig, ) from ...io import ( make_dirs, read_text_file, ) from ...util import ( ApplicationError, display, SubprocessError, remove_tree, ) from ...util_common import ( named_temporary_file, ResultType, run_command, write_json_test_results, check_pyyaml, ) from ...coverage_util import ( cover_python, ) from ...cache import ( CommonCache, ) from .cloud import ( CloudEnvironmentConfig, cloud_filter, cloud_init, get_cloud_environment, get_cloud_platforms, ) from ...data import ( data_context, ) from ...host_configs import ( OriginConfig, ) from ...host_profiles import ( ControllerProfile, HostProfile, PosixProfile, SshTargetHostProfile, ) from ...provisioning import ( HostState, prepare_profiles, ) from ...pypi_proxy import ( configure_pypi_proxy, ) from ...inventory import ( create_controller_inventory, create_windows_inventory, create_network_inventory, create_posix_inventory, ) from .filters import ( get_target_filter, ) from .coverage import ( CoverageManager, ) THostProfile = t.TypeVar('THostProfile', bound=HostProfile) def generate_dependency_map(integration_targets): # type: (t.List[IntegrationTarget]) -> t.Dict[str, t.Set[IntegrationTarget]] """Analyze the given list of integration test targets and return a dictionary expressing target names and the targets on which they depend.""" targets_dict = dict((target.name, target) for target in integration_targets) target_dependencies = analyze_integration_target_dependencies(integration_targets) dependency_map = {} # type: t.Dict[str, t.Set[IntegrationTarget]] invalid_targets = set() for dependency, dependents in target_dependencies.items(): dependency_target = targets_dict.get(dependency) if not dependency_target: invalid_targets.add(dependency) continue for dependent in dependents: if dependent not in dependency_map: dependency_map[dependent] = set() dependency_map[dependent].add(dependency_target) if invalid_targets: raise ApplicationError('Non-existent target dependencies: %s' % ', '.join(sorted(invalid_targets))) return dependency_map def get_files_needed(target_dependencies): # type: (t.List[IntegrationTarget]) -> t.List[str] """Return a list of files needed by the given list of target dependencies.""" files_needed = [] # type: t.List[str] for target_dependency in target_dependencies: files_needed += target_dependency.needs_file files_needed = sorted(set(files_needed)) invalid_paths = [path for path in files_needed if not os.path.isfile(path)] if invalid_paths: raise ApplicationError('Invalid "needs/file/*" aliases:\n%s' % '\n'.join(invalid_paths)) return files_needed def check_inventory(args, inventory_path): # type: (IntegrationConfig, str) -> None """Check the given inventory for issues.""" if not isinstance(args.controller, OriginConfig): if 
os.path.exists(inventory_path): inventory = read_text_file(inventory_path) if 'ansible_ssh_private_key_file' in inventory: display.warning('Use of "ansible_ssh_private_key_file" in inventory with the --docker or --remote option is unsupported and will likely fail.') def get_inventory_relative_path(args): # type: (IntegrationConfig) -> str """Return the inventory path used for the given integration configuration relative to the content root.""" inventory_names = { PosixIntegrationConfig: 'inventory', WindowsIntegrationConfig: 'inventory.winrm', NetworkIntegrationConfig: 'inventory.networking', } # type: t.Dict[t.Type[IntegrationConfig], str] return os.path.join(data_context().content.integration_path, inventory_names[type(args)]) def delegate_inventory(args, inventory_path_src): # type: (IntegrationConfig, str) -> None """Make the given inventory available during delegation.""" if isinstance(args, PosixIntegrationConfig): return def inventory_callback(files): # type: (t.List[t.Tuple[str, str]]) -> None """ Add the inventory file to the payload file list. This will preserve the file during delegation even if it is ignored or is outside the content and install roots. """ inventory_path = get_inventory_relative_path(args) inventory_tuple = inventory_path_src, inventory_path if os.path.isfile(inventory_path_src) and inventory_tuple not in files: originals = [item for item in files if item[1] == inventory_path] if originals: for original in originals: files.remove(original) display.warning('Overriding inventory file "%s" with "%s".' % (inventory_path, inventory_path_src)) else: display.notice('Sourcing inventory file "%s" from "%s".' % (inventory_path, inventory_path_src)) files.append(inventory_tuple) data_context().register_payload_callback(inventory_callback) @contextlib.contextmanager def integration_test_environment( args, # type: IntegrationConfig target, # type: IntegrationTarget inventory_path_src, # type: str ): # type: (...) -> t.Iterator[IntegrationEnvironment] """Context manager that prepares the integration test environment and cleans it up.""" ansible_config_src = args.get_ansible_config() ansible_config_relative = os.path.join(data_context().content.integration_path, '%s.cfg' % args.command) if args.no_temp_workdir or 'no/temp_workdir/' in target.aliases: display.warning('Disabling the temp work dir is a temporary debugging feature that may be removed in the future without notice.') integration_dir = os.path.join(data_context().content.root, data_context().content.integration_path) targets_dir = os.path.join(data_context().content.root, data_context().content.integration_targets_path) inventory_path = inventory_path_src ansible_config = ansible_config_src vars_file = os.path.join(data_context().content.root, data_context().content.integration_vars_path) yield IntegrationEnvironment(data_context().content.root, integration_dir, targets_dir, inventory_path, ansible_config, vars_file) return # When testing a collection, the temporary directory must reside within the collection. # This is necessary to enable support for the default collection for non-collection content (playbooks and roles). 
root_temp_dir = os.path.join(ResultType.TMP.path, 'integration') prefix = '%s-' % target.name suffix = u'-\u00c5\u00d1\u015a\u00cc\u03b2\u0141\u00c8' if args.no_temp_unicode or 'no/temp_unicode/' in target.aliases: display.warning('Disabling unicode in the temp work dir is a temporary debugging feature that may be removed in the future without notice.') suffix = '-ansible' if args.explain: temp_dir = os.path.join(root_temp_dir, '%stemp%s' % (prefix, suffix)) else: make_dirs(root_temp_dir) temp_dir = tempfile.mkdtemp(prefix=prefix, suffix=suffix, dir=root_temp_dir) try: display.info('Preparing temporary directory: %s' % temp_dir, verbosity=2) inventory_relative_path = get_inventory_relative_path(args) inventory_path = os.path.join(temp_dir, inventory_relative_path) cache = IntegrationCache(args) target_dependencies = sorted([target] + list(cache.dependency_map.get(target.name, set()))) files_needed = get_files_needed(target_dependencies) integration_dir = os.path.join(temp_dir, data_context().content.integration_path) targets_dir = os.path.join(temp_dir, data_context().content.integration_targets_path) ansible_config = os.path.join(temp_dir, ansible_config_relative) vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path) vars_file = os.path.join(temp_dir, data_context().content.integration_vars_path) file_copies = [ (ansible_config_src, ansible_config), (inventory_path_src, inventory_path), ] if os.path.exists(vars_file_src): file_copies.append((vars_file_src, vars_file)) file_copies += [(path, os.path.join(temp_dir, path)) for path in files_needed] integration_targets_relative_path = data_context().content.integration_targets_path directory_copies = [ ( os.path.join(integration_targets_relative_path, target.relative_path), os.path.join(temp_dir, integration_targets_relative_path, target.relative_path) ) for target in target_dependencies ] directory_copies = sorted(set(directory_copies)) file_copies = sorted(set(file_copies)) if not args.explain: make_dirs(integration_dir) for dir_src, dir_dst in directory_copies: display.info('Copying %s/ to %s/' % (dir_src, dir_dst), verbosity=2) if not args.explain: shutil.copytree(to_bytes(dir_src), to_bytes(dir_dst), symlinks=True) # type: ignore[arg-type] # incorrect type stub omits bytes path support for file_src, file_dst in file_copies: display.info('Copying %s to %s' % (file_src, file_dst), verbosity=2) if not args.explain: make_dirs(os.path.dirname(file_dst)) shutil.copy2(file_src, file_dst) yield IntegrationEnvironment(temp_dir, integration_dir, targets_dir, inventory_path, ansible_config, vars_file) finally: if not args.explain: remove_tree(temp_dir) @contextlib.contextmanager def integration_test_config_file( args, # type: IntegrationConfig env_config, # type: CloudEnvironmentConfig integration_dir, # type: str ): # type: (...) 
-> t.Iterator[t.Optional[str]] """Context manager that provides a config file for integration tests, if needed.""" if not env_config: yield None return config_vars = (env_config.ansible_vars or {}).copy() config_vars.update(dict( ansible_test=dict( environment=env_config.env_vars, module_defaults=env_config.module_defaults, ) )) config_file = json.dumps(config_vars, indent=4, sort_keys=True) with named_temporary_file(args, 'config-file-', '.json', integration_dir, config_file) as path: # type: str filename = os.path.relpath(path, integration_dir) display.info('>>> Config File: %s\n%s' % (filename, config_file), verbosity=3) yield path def create_inventory( args, # type: IntegrationConfig host_state, # type: HostState inventory_path, # type: str target, # type: IntegrationTarget ): # type: (...) -> None """Create inventory.""" if isinstance(args, PosixIntegrationConfig): if target.target_type == IntegrationTargetType.CONTROLLER: display.info('Configuring controller inventory.', verbosity=1) create_controller_inventory(args, inventory_path, host_state.controller_profile) elif target.target_type == IntegrationTargetType.TARGET: display.info('Configuring target inventory.', verbosity=1) create_posix_inventory(args, inventory_path, host_state.target_profiles, 'needs/ssh/' in target.aliases) else: raise Exception(f'Unhandled test type for target "{target.name}": {target.target_type.name.lower()}') elif isinstance(args, WindowsIntegrationConfig): display.info('Configuring target inventory.', verbosity=1) target_profiles = filter_profiles_for_target(args, host_state.target_profiles, target) create_windows_inventory(args, inventory_path, target_profiles) elif isinstance(args, NetworkIntegrationConfig): display.info('Configuring target inventory.', verbosity=1) target_profiles = filter_profiles_for_target(args, host_state.target_profiles, target) create_network_inventory(args, inventory_path, target_profiles) def command_integration_filtered( args, # type: IntegrationConfig host_state, # type: HostState targets, # type: t.Tuple[IntegrationTarget, ...] all_targets, # type: t.Tuple[IntegrationTarget, ...] inventory_path, # type: str pre_target=None, # type: t.Optional[t.Callable[[IntegrationTarget], None]] post_target=None, # type: t.Optional[t.Callable[[IntegrationTarget], None]] ): """Run integration tests for the specified targets.""" found = False passed = [] failed = [] targets_iter = iter(targets) all_targets_dict = dict((target.name, target) for target in all_targets) setup_errors = [] setup_targets_executed = set() # type: t.Set[str] for target in all_targets: for setup_target in target.setup_once + target.setup_always: if setup_target not in all_targets_dict: setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target)) if setup_errors: raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors))) check_pyyaml(host_state.controller_profile.python) test_dir = os.path.join(ResultType.TMP.path, 'output_dir') if not args.explain and any('needs/ssh/' in target.aliases for target in targets): max_tries = 20 display.info('SSH connection to controller required by tests. Checking the connection.') for i in range(1, max_tries + 1): try: run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True) display.info('SSH service responded.') break except SubprocessError: if i == max_tries: raise seconds = 3 display.warning('SSH service not responding. Waiting %d second(s) before checking again.' 
% seconds) time.sleep(seconds) start_at_task = args.start_at_task results = {} target_profile = host_state.target_profiles[0] if isinstance(target_profile, PosixProfile): target_python = target_profile.python if isinstance(target_profile, ControllerProfile): if host_state.controller_profile.python.path != target_profile.python.path: install_requirements(args, target_python, command=True, controller=False) # integration elif isinstance(target_profile, SshTargetHostProfile): connection = target_profile.get_controller_target_connections()[0] install_requirements(args, target_python, command=True, controller=False, connection=connection) # integration coverage_manager = CoverageManager(args, host_state, inventory_path) coverage_manager.setup() try: for target in targets_iter: if args.start_at and not found: found = target.name == args.start_at if not found: continue create_inventory(args, host_state, inventory_path, target) tries = 2 if args.retry_on_error else 1 verbosity = args.verbosity cloud_environment = get_cloud_environment(args, target) try: while tries: tries -= 1 try: if cloud_environment: cloud_environment.setup_once() run_setup_targets(args, host_state, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, inventory_path, coverage_manager, False) start_time = time.time() if pre_target: pre_target(target) run_setup_targets(args, host_state, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path, coverage_manager, True) if not args.explain: # create a fresh test directory for each test target remove_tree(test_dir) make_dirs(test_dir) try: if target.script_path: command_integration_script(args, host_state, target, test_dir, inventory_path, coverage_manager) else: command_integration_role(args, host_state, target, start_at_task, test_dir, inventory_path, coverage_manager) start_at_task = None finally: if post_target: post_target(target) end_time = time.time() results[target.name] = dict( name=target.name, type=target.type, aliases=target.aliases, modules=target.modules, run_time_seconds=int(end_time - start_time), setup_once=target.setup_once, setup_always=target.setup_always, ) break except SubprocessError: if cloud_environment: cloud_environment.on_failure(target, tries) if not tries: raise display.warning('Retrying test target "%s" with maximum verbosity.' % target.name) display.verbosity = args.verbosity = 6 passed.append(target) except Exception as ex: failed.append(target) if args.continue_on_error: display.error(str(ex)) continue display.notice('To resume at this test target, use the option: --start-at %s' % target.name) next_target = next(targets_iter, None) if next_target: display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name) raise finally: display.verbosity = args.verbosity = verbosity finally: if not args.explain: coverage_manager.teardown() result_name = '%s-%s.json' % ( args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0)))) data = dict( targets=results, ) write_json_test_results(ResultType.DATA, result_name, data) if failed: raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. 
See error output above for details:\n%s' % ( len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed))) def command_integration_script( args, # type: IntegrationConfig host_state, # type: HostState target, # type: IntegrationTarget test_dir, # type: str inventory_path, # type: str coverage_manager, # type: CoverageManager ): """Run an integration test script.""" display.info('Running %s integration test script' % target.name) env_config = None if isinstance(args, PosixIntegrationConfig): cloud_environment = get_cloud_environment(args, target) if cloud_environment: env_config = cloud_environment.get_environment_config() if env_config: display.info('>>> Environment Config\n%s' % json.dumps(dict( env_vars=env_config.env_vars, ansible_vars=env_config.ansible_vars, callback_plugins=env_config.callback_plugins, module_defaults=env_config.module_defaults, ), indent=4, sort_keys=True), verbosity=3) with integration_test_environment(args, target, inventory_path) as test_env: # type: IntegrationEnvironment cmd = ['./%s' % os.path.basename(target.script_path)] if args.verbosity: cmd.append('-' + ('v' * args.verbosity)) env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config, test_env) cwd = os.path.join(test_env.targets_dir, target.relative_path) env.update(dict( # support use of adhoc ansible commands in collections without specifying the fully qualified collection name ANSIBLE_PLAYBOOK_DIR=cwd, )) if env_config and env_config.env_vars: env.update(env_config.env_vars) with integration_test_config_file(args, env_config, test_env.integration_dir) as config_path: # type: t.Optional[str] if config_path: cmd += ['-e', '@%s' % config_path] env.update(coverage_manager.get_environment(target.name, target.aliases)) cover_python(args, host_state.controller_profile.python, cmd, target.name, env, cwd=cwd) def command_integration_role( args, # type: IntegrationConfig host_state, # type: HostState target, # type: IntegrationTarget start_at_task, # type: t.Optional[str] test_dir, # type: str inventory_path, # type: str coverage_manager, # type: CoverageManager ): """Run an integration test role.""" display.info('Running %s integration test role' % target.name) env_config = None vars_files = [] variables = dict( output_dir=test_dir, ) if isinstance(args, WindowsIntegrationConfig): hosts = 'windows' gather_facts = False variables.update(dict( win_output_dir=r'C:\ansible_testing', )) elif isinstance(args, NetworkIntegrationConfig): hosts = target.network_platform gather_facts = False else: hosts = 'testhost' gather_facts = True if 'gather_facts/yes/' in target.aliases: gather_facts = True elif 'gather_facts/no/' in target.aliases: gather_facts = False if not isinstance(args, NetworkIntegrationConfig): cloud_environment = get_cloud_environment(args, target) if cloud_environment: env_config = cloud_environment.get_environment_config() if env_config: display.info('>>> Environment Config\n%s' % json.dumps(dict( env_vars=env_config.env_vars, ansible_vars=env_config.ansible_vars, callback_plugins=env_config.callback_plugins, module_defaults=env_config.module_defaults, ), indent=4, sort_keys=True), verbosity=3) with integration_test_environment(args, target, inventory_path) as test_env: # type: IntegrationEnvironment if os.path.exists(test_env.vars_file): vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir)) play = dict( hosts=hosts, gather_facts=gather_facts, vars_files=vars_files, vars=variables, roles=[ 
target.name, ], ) if env_config: if env_config.ansible_vars: variables.update(env_config.ansible_vars) play.update(dict( environment=env_config.env_vars, module_defaults=env_config.module_defaults, )) playbook = json.dumps([play], indent=4, sort_keys=True) with named_temporary_file(args=args, directory=test_env.integration_dir, prefix='%s-' % target.name, suffix='.yml', content=playbook) as playbook_path: filename = os.path.basename(playbook_path) display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3) cmd = ['ansible-playbook', filename, '-i', os.path.relpath(test_env.inventory_path, test_env.integration_dir)] if start_at_task: cmd += ['--start-at-task', start_at_task] if args.tags: cmd += ['--tags', args.tags] if args.skip_tags: cmd += ['--skip-tags', args.skip_tags] if args.diff: cmd += ['--diff'] if isinstance(args, NetworkIntegrationConfig): if args.testcase: cmd += ['-e', 'testcase=%s' % args.testcase] if args.verbosity: cmd.append('-' + ('v' * args.verbosity)) env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config, test_env) cwd = test_env.integration_dir env.update(dict( # support use of adhoc ansible commands in collections without specifying the fully qualified collection name ANSIBLE_PLAYBOOK_DIR=cwd, )) if env_config and env_config.env_vars: env.update(env_config.env_vars) env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir env.update(coverage_manager.get_environment(target.name, target.aliases)) cover_python(args, host_state.controller_profile.python, cmd, target.name, env, cwd=cwd) def run_setup_targets( args, # type: IntegrationConfig host_state, # type: HostState test_dir, # type: str target_names, # type: t.Sequence[str] targets_dict, # type: t.Dict[str, IntegrationTarget] targets_executed, # type: t.Set[str] inventory_path, # type: str coverage_manager, # type: CoverageManager always, # type: bool ): """Run setup targets.""" for target_name in target_names: if not always and target_name in targets_executed: continue target = targets_dict[target_name] if not args.explain: # create a fresh test directory for each test target remove_tree(test_dir) make_dirs(test_dir) if target.script_path: command_integration_script(args, host_state, target, test_dir, inventory_path, coverage_manager) else: command_integration_role(args, host_state, target, None, test_dir, inventory_path, coverage_manager) targets_executed.add(target_name) def integration_environment( args, # type: IntegrationConfig target, # type: IntegrationTarget test_dir, # type: str inventory_path, # type: str ansible_config, # type: t.Optional[str] env_config, # type: t.Optional[CloudEnvironmentConfig] test_env, # type: IntegrationEnvironment ): # type: (...) 
-> t.Dict[str, str] """Return a dictionary of environment variables to use when running the given integration test target.""" env = ansible_environment(args, ansible_config=ansible_config) callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else []) integration = dict( JUNIT_OUTPUT_DIR=ResultType.JUNIT.path, JUNIT_TASK_RELATIVE_PATH=test_env.test_dir, JUNIT_REPLACE_OUT_OF_TREE_PATH='out-of-tree:', ANSIBLE_CALLBACKS_ENABLED=','.join(sorted(set(callback_plugins))), ANSIBLE_TEST_CI=args.metadata.ci_provider or get_ci_provider().code, ANSIBLE_TEST_COVERAGE='check' if args.coverage_check else ('yes' if args.coverage else ''), OUTPUT_DIR=test_dir, INVENTORY_PATH=os.path.abspath(inventory_path), ) if args.debug_strategy: env.update(dict(ANSIBLE_STRATEGY='debug')) if 'non_local/' in target.aliases: if args.coverage: display.warning('Skipping coverage reporting on Ansible modules for non-local test: %s' % target.name) env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER='')) env.update(integration) return env class IntegrationEnvironment: """Details about the integration environment.""" def __init__(self, test_dir, integration_dir, targets_dir, inventory_path, ansible_config, vars_file): self.test_dir = test_dir self.integration_dir = integration_dir self.targets_dir = targets_dir self.inventory_path = inventory_path self.ansible_config = ansible_config self.vars_file = vars_file class IntegrationCache(CommonCache): """Integration cache.""" @property def integration_targets(self): """ :rtype: list[IntegrationTarget] """ return self.get('integration_targets', lambda: list(walk_integration_targets())) @property def dependency_map(self): """ :rtype: dict[str, set[IntegrationTarget]] """ return self.get('dependency_map', lambda: generate_dependency_map(self.integration_targets)) def filter_profiles_for_target(args, profiles, target): # type: (IntegrationConfig, t.List[THostProfile], IntegrationTarget) -> t.List[THostProfile] """Return a list of profiles after applying target filters.""" if target.target_type == IntegrationTargetType.CONTROLLER: profile_filter = get_target_filter(args, [args.controller], True) elif target.target_type == IntegrationTargetType.TARGET: profile_filter = get_target_filter(args, args.targets, False) else: raise Exception(f'Unhandled test type for target "{target.name}": {target.target_type.name.lower()}') profiles = profile_filter.filter_profiles(profiles, target) return profiles def get_integration_filter(args, targets): # type: (IntegrationConfig, t.List[IntegrationTarget]) -> t.Set[str] """Return a list of test targets to skip based on the host(s) that will be used to run the specified test targets.""" invalid_targets = sorted(target.name for target in targets if target.target_type not in (IntegrationTargetType.CONTROLLER, IntegrationTargetType.TARGET)) if invalid_targets and not args.list_targets: message = f'''Unable to determine context for the following test targets: {", ".join(invalid_targets)} Make sure the test targets are correctly named: - Modules - The target name should match the module name. - Plugins - The target name should be "{{plugin_type}}_{{plugin_name}}". If necessary, context can be controlled by adding entries to the "aliases" file for a test target: - Add the name(s) of modules which are tested. - Add "context/target" for module and module_utils tests (these will run on the target host). 
- Add "context/controller" for other test types (these will run on the controller).''' raise ApplicationError(message) invalid_targets = sorted(target.name for target in targets if target.actual_type not in (IntegrationTargetType.CONTROLLER, IntegrationTargetType.TARGET)) if invalid_targets: if data_context().content.is_ansible: display.warning(f'Unable to determine context for the following test targets: {", ".join(invalid_targets)}') else: display.warning(f'Unable to determine context for the following test targets, they will be run on the target host: {", ".join(invalid_targets)}') exclude = set() # type: t.Set[str] controller_targets = [target for target in targets if target.target_type == IntegrationTargetType.CONTROLLER] target_targets = [target for target in targets if target.target_type == IntegrationTargetType.TARGET] controller_filter = get_target_filter(args, [args.controller], True) target_filter = get_target_filter(args, args.targets, False) controller_filter.filter_targets(controller_targets, exclude) target_filter.filter_targets(target_targets, exclude) return exclude def command_integration_filter(args, # type: TIntegrationConfig targets, # type: t.Iterable[TIntegrationTarget] ): # type: (...) -> t.Tuple[HostState, t.Tuple[TIntegrationTarget, ...]] """Filter the given integration test targets.""" targets = tuple(target for target in targets if 'hidden/' not in target.aliases) changes = get_changes_filter(args) # special behavior when the --changed-all-target target is selected based on changes if args.changed_all_target in changes: # act as though the --changed-all-target target was in the include list if args.changed_all_mode == 'include' and args.changed_all_target not in args.include: args.include.append(args.changed_all_target) args.delegate_args += ['--include', args.changed_all_target] # act as though the --changed-all-target target was in the exclude list elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude: args.exclude.append(args.changed_all_target) require = args.require + changes exclude = args.exclude internal_targets = walk_internal_targets(targets, args.include, exclude, require) environment_exclude = get_integration_filter(args, list(internal_targets)) environment_exclude |= set(cloud_filter(args, internal_targets)) if environment_exclude: exclude = sorted(set(exclude) | environment_exclude) internal_targets = walk_internal_targets(targets, args.include, exclude, require) if not internal_targets: raise AllTargetsSkipped() if args.start_at and not any(target.name == args.start_at for target in internal_targets): raise ApplicationError('Start at target matches nothing: %s' % args.start_at) cloud_init(args, internal_targets) vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path) if os.path.exists(vars_file_src): def integration_config_callback(files): # type: (t.List[t.Tuple[str, str]]) -> None """ Add the integration config vars file to the payload file list. This will preserve the file during delegation even if the file is ignored by source control. 
""" files.append((vars_file_src, data_context().content.integration_vars_path)) data_context().register_payload_callback(integration_config_callback) if args.list_targets: raise ListTargets([target.name for target in internal_targets]) # requirements are installed using a callback since the windows-integration and network-integration host status checks depend on them host_state = prepare_profiles(args, targets_use_pypi=True, requirements=requirements) # integration, windows-integration, network-integration if args.delegate: raise Delegate(host_state=host_state, require=require, exclude=exclude) return host_state, internal_targets def requirements(args, host_state): # type: (IntegrationConfig, HostState) -> None """Install requirements.""" target_profile = host_state.target_profiles[0] configure_pypi_proxy(args, host_state.controller_profile) # integration, windows-integration, network-integration if isinstance(target_profile, PosixProfile) and not isinstance(target_profile, ControllerProfile): configure_pypi_proxy(args, target_profile) # integration install_requirements(args, host_state.controller_profile.python, ansible=True, command=True) # integration, windows-integration, network-integration
closed
ansible/ansible
https://github.com/ansible/ansible
76,464
ansible-test: development environment no longer works
### Summary I am developing modules using the latest stable versions of Ansible on either Python 3.8 or 3.9. Both versions are giving errors. ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console ansible [core 2.12.0] config file = /Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kbreit/.pyenv/versions/3.9.7/envs/ansible-development-latest/lib/python3.9/site-packages/ansible ansible collection location = /Users/kbreit/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kbreit/.pyenv/versions/ansible-development-latest/bin/ansible python version = 3.9.7 (default, Nov 7 2021, 06:44:13) [Clang 12.0.5 (clang-1205.0.22.9)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kbreit/pass.txt GALAXY_SERVER_LIST(/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg) = ['automation_hub'] ``` ### OS / Environment macOS and Ubuntu 21.04 ### Steps to Reproduce Reproducing it largely depends on ansible-test. However... <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) (ansible-development-latest) meraki [master●●] % ansible-test network-integration --allow-unsupported --python 3.9 --docker default meraki_network ``` ### Expected Results I expect the `meraki_network` module (or whatever module I'm testing) to execute in full. ### Actual Results ```console Starting new "ansible-test-controller-pVmIMX5s" container. Adding "ansible-test-controller-pVmIMX5s" to container database. NOTICE: Sourcing inventory file "tests/integration/inventory.networking" from "/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/tests/integration/inventory.networking". 
Traceback (most recent call last): File "/root/ansible/bin/ansible-test", line 42, in <module> main() File "/root/ansible/bin/ansible-test", line 33, in main cli_main() File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 70, in main args.func(config) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/network.py", line 73, in command_network_integration command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 458, in command_integration_filtered create_inventory(args, host_state, inventory_path, target) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 382, in create_inventory create_network_inventory(args, inventory_path, target_profiles) File "/root/ansible/test/lib/ansible_test/_internal/inventory.py", line 90, in create_network_inventory shutil.copyfile(first.config.path, path) File "/usr/lib/python3.9/shutil.py", line 243, in copyfile if _samefile(src, dst): File "/usr/lib/python3.9/shutil.py", line 220, in _samefile return os.path.samefile(src, dst) File "/usr/lib/python3.9/genericpath.py", line 100, in samefile s1 = os.stat(f1) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ERROR: Command "docker exec -it ansible-test-controller-pVmIMX5s /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/cisco/meraki LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test network-integration --containers '{}' --allow-unsupported meraki_network --metadata tests/output/.tmp/metadata-os7__ecv.json --truncate 137 --color yes --host-path tests/output/.tmp/host-_fjd92yd" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/76464
https://github.com/ansible/ansible/pull/77255
b60a5eefb2bb93051b8b939889eb3a26e3319c7c
e8afdac06e3cdbb52885ec15660ea265e62d63ab
2021-12-04T02:09:37Z
python
2022-03-11T02:17:49Z
test/lib/ansible_test/_internal/commands/integration/network.py
"""Network integration testing.""" from __future__ import annotations import os from ...util import ( ApplicationError, ANSIBLE_TEST_CONFIG_ROOT, ) from ...util_common import ( handle_layout_messages, ) from ...target import ( walk_network_integration_targets, ) from ...config import ( NetworkIntegrationConfig, ) from . import ( command_integration_filter, command_integration_filtered, get_inventory_relative_path, check_inventory, delegate_inventory, ) from ...data import ( data_context, ) from ...host_configs import ( NetworkInventoryConfig, NetworkRemoteConfig, ) def command_network_integration(args): # type: (NetworkIntegrationConfig) -> None """Entry point for the `network-integration` command.""" handle_layout_messages(data_context().content.integration_messages) inventory_relative_path = get_inventory_relative_path(args) template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template' if issubclass(args.target_type, NetworkInventoryConfig): inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.only_target(NetworkInventoryConfig).path or os.path.basename(inventory_relative_path)) else: inventory_path = os.path.join(data_context().content.root, inventory_relative_path) if args.no_temp_workdir: # temporary solution to keep DCI tests working inventory_exists = os.path.exists(inventory_path) else: inventory_exists = os.path.isfile(inventory_path) if not args.explain and not issubclass(args.target_type, NetworkRemoteConfig) and not inventory_exists: raise ApplicationError( 'Inventory not found: %s\n' 'Use --inventory to specify the inventory path.\n' 'Use --platform to provision resources and generate an inventory file.\n' 'See also inventory template: %s' % (inventory_path, template_path) ) check_inventory(args, inventory_path) delegate_inventory(args, inventory_path) all_targets = tuple(walk_network_integration_targets(include_hidden=True)) host_state, internal_targets = command_integration_filter(args, all_targets) command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path)
closed
ansible/ansible
https://github.com/ansible/ansible
76,464
ansible-test: development environment no longer works
### Summary I am developing modules using the latest stable versions of Ansible on either Python 3.8 or 3.9. Both versions are giving errors. ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console ansible [core 2.12.0] config file = /Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kbreit/.pyenv/versions/3.9.7/envs/ansible-development-latest/lib/python3.9/site-packages/ansible ansible collection location = /Users/kbreit/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kbreit/.pyenv/versions/ansible-development-latest/bin/ansible python version = 3.9.7 (default, Nov 7 2021, 06:44:13) [Clang 12.0.5 (clang-1205.0.22.9)] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kbreit/pass.txt GALAXY_SERVER_LIST(/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/ansible.cfg) = ['automation_hub'] ``` ### OS / Environment macOS and Ubuntu 21.04 ### Steps to Reproduce Reproducing it largely depends on ansible-test. However... <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) (ansible-development-latest) meraki [master●●] % ansible-test network-integration --allow-unsupported --python 3.9 --docker default meraki_network ``` ### Expected Results I expect the `meraki_network` module (or whatever module I'm testing) to execute in full. ### Actual Results ```console Starting new "ansible-test-controller-pVmIMX5s" container. Adding "ansible-test-controller-pVmIMX5s" to container database. NOTICE: Sourcing inventory file "tests/integration/inventory.networking" from "/Users/kbreit/Documents/Programming/ansible_collections/cisco/meraki/tests/integration/inventory.networking". 
Traceback (most recent call last): File "/root/ansible/bin/ansible-test", line 42, in <module> main() File "/root/ansible/bin/ansible-test", line 33, in main cli_main() File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 70, in main args.func(config) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/network.py", line 73, in command_network_integration command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 458, in command_integration_filtered create_inventory(args, host_state, inventory_path, target) File "/root/ansible/test/lib/ansible_test/_internal/commands/integration/__init__.py", line 382, in create_inventory create_network_inventory(args, inventory_path, target_profiles) File "/root/ansible/test/lib/ansible_test/_internal/inventory.py", line 90, in create_network_inventory shutil.copyfile(first.config.path, path) File "/usr/lib/python3.9/shutil.py", line 243, in copyfile if _samefile(src, dst): File "/usr/lib/python3.9/shutil.py", line 220, in _samefile return os.path.samefile(src, dst) File "/usr/lib/python3.9/genericpath.py", line 100, in samefile s1 = os.stat(f1) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ERROR: Command "docker exec -it ansible-test-controller-pVmIMX5s /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/cisco/meraki LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test network-integration --containers '{}' --allow-unsupported meraki_network --metadata tests/output/.tmp/metadata-os7__ecv.json --truncate 137 --color yes --host-path tests/output/.tmp/host-_fjd92yd" returned exit status 1. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
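The traceback bottoms out in `shutil.copyfile(first.config.path, path)` inside `create_network_inventory`, with `first.config.path` set to `None` because no inventory path was resolved for the `NetworkInventoryConfig`; `shutil.copyfile()` calls `os.stat()` on its source, which is what raises the `TypeError`. The sketch below is not the actual fix from the linked PR — it is a minimal illustration of the failure mode and the kind of guard that turns it into a readable error. The helper name `copy_network_inventory` is hypothetical.

```python
import os
import shutil


def copy_network_inventory(source_path, destination_path):
    """Guarded copy of a user-supplied inventory file (hypothetical helper).

    shutil.copyfile() stats its source, so a source of None surfaces as:
    TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
    """
    if source_path is None:
        # No user-supplied inventory; the caller should generate one instead of copying.
        raise ValueError('no inventory path configured for the network target')

    if not os.path.isfile(source_path):
        raise FileNotFoundError('inventory not found: %s' % source_path)

    shutil.copyfile(source_path, destination_path)
```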
https://github.com/ansible/ansible/issues/76464
https://github.com/ansible/ansible/pull/77255
b60a5eefb2bb93051b8b939889eb3a26e3319c7c
e8afdac06e3cdbb52885ec15660ea265e62d63ab
2021-12-04T02:09:37Z
python
2022-03-11T02:17:49Z
test/lib/ansible_test/_internal/commands/integration/windows.py
"""Windows integration testing.""" from __future__ import annotations import os from ...util import ( ApplicationError, ANSIBLE_TEST_CONFIG_ROOT, ) from ...util_common import ( handle_layout_messages, ) from ...containers import ( create_container_hooks, local_ssh, root_ssh, ) from ...target import ( walk_windows_integration_targets, ) from ...config import ( WindowsIntegrationConfig, ) from ...host_configs import ( WindowsInventoryConfig, WindowsRemoteConfig, ) from . import ( command_integration_filter, command_integration_filtered, get_inventory_relative_path, check_inventory, delegate_inventory, ) from ...data import ( data_context, ) def command_windows_integration(args): # type: (WindowsIntegrationConfig) -> None """Entry point for the `windows-integration` command.""" handle_layout_messages(data_context().content.integration_messages) inventory_relative_path = get_inventory_relative_path(args) template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template' if issubclass(args.target_type, WindowsInventoryConfig): inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.only_target(WindowsInventoryConfig).path or os.path.basename(inventory_relative_path)) else: inventory_path = os.path.join(data_context().content.root, inventory_relative_path) if not args.explain and not issubclass(args.target_type, WindowsRemoteConfig) and not os.path.isfile(inventory_path): raise ApplicationError( 'Inventory not found: %s\n' 'Use --inventory to specify the inventory path.\n' 'Use --windows to provision resources and generate an inventory file.\n' 'See also inventory template: %s' % (inventory_path, template_path) ) check_inventory(args, inventory_path) delegate_inventory(args, inventory_path) all_targets = tuple(walk_windows_integration_targets(include_hidden=True)) host_state, internal_targets = command_integration_filter(args, all_targets) control_connections = [local_ssh(args, host_state.controller_profile.python)] managed_connections = [root_ssh(ssh) for ssh in host_state.get_controller_target_connections()] pre_target, post_target = create_container_hooks(args, control_connections, managed_connections) command_integration_filtered(args, host_state, internal_targets, all_targets, inventory_path, pre_target=pre_target, post_target=post_target)
closed
ansible/ansible
https://github.com/ansible/ansible
77,221
Wrong collection listed for ipaddr filter
### Summary https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters_ipaddr.html https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst ipaddr is migrated to ansible.utils from ansible.netcommon ``` [DEPRECATION WARNING]: Use 'ansible.utils.ipaddr' module instead. This feature will be removed from ansible.netcommon in a release after 2024-01-01. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. ``` ### Issue Type Documentation Report ### Component Name docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst ### Ansible Version ```console $ ansible --version ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment . ### Additional Information . ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77221
https://github.com/ansible/ansible/pull/77281
6546c484f4ece685b339423034c0cd6e18cdcba6
dfda04894f25a9f491b5bde7ae051da01d12d654
2022-03-07T16:33:08Z
python
2022-03-14T20:27:06Z
docs/docsite/rst/user_guide/playbooks_filters_ipaddr.rst
:orphan: .. _playbooks_filters_ipaddr: ipaddr filter ````````````` .. versionadded:: 1.9 ``ipaddr()`` is a Jinja2 filter designed to provide an interface to the `netaddr`_ Python package from within Ansible. It can operate on strings or lists of items, test various data to check if they are valid IP addresses, and manipulate the input data to extract requested information. ``ipaddr()`` works with both IPv4 and IPv6 addresses in various forms. There are also additional functions available to manipulate IP subnets and MAC addresses. .. note:: The ``ipaddr()`` filter migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection. To use this filter in Ansible, you need to install the `netaddr`_ Python library on a computer on which you use Ansible (it is not required on remote hosts). It can usually be installed with either your system package manager or using ``pip``: .. code-block:: bash pip install netaddr .. _netaddr: https://pypi.org/project/netaddr/ .. contents:: Topics :local: :depth: 2 :backlinks: top Basic tests ^^^^^^^^^^^ ``ipaddr()`` is designed to return the input value if a query is True, and ``False`` if a query is False. This way it can be easily used in chained filters. To use the filter, pass a string to it: .. code-block:: none {{ '192.0.2.0' | ansible.netcommon.ipaddr }} You can also pass the values as variables: .. code-block:: yaml+jinja {{ myvar | ansible.netcommon.ipaddr }} Here are some example test results of various input strings: .. code-block:: none # These values are valid IP addresses or network ranges '192.168.0.1' -> 192.168.0.1 '192.168.32.0/24' -> 192.168.32.0/24 'fe80::100/10' -> fe80::100/10 45443646733 -> ::a:94a7:50d '523454/24' -> 0.7.252.190/24 # Values that are not valid IP addresses or network ranges 'localhost' -> False True -> False 'space bar' -> False False -> False '' -> False ':' -> False 'fe80:/10' -> False Sometimes you need either IPv4 or IPv6 addresses. To filter only for a particular type, ``ipaddr()`` filter has two "aliases", ``ipv4()`` and ``ipv6()``. Example use of an IPv4 filter: .. code-block:: yaml+jinja {{ myvar | ansible.netcommon.ipv4 }} A similar example of an IPv6 filter: .. code-block:: yaml+jinja {{ myvar | ansible.netcommon.ipv6 }} Here's some example test results to look for IPv4 addresses: .. code-block:: none '192.168.0.1' -> 192.168.0.1 '192.168.32.0/24' -> 192.168.32.0/24 'fe80::100/10' -> False 45443646733 -> False '523454/24' -> 0.7.252.190/24 And the same data filtered for IPv6 addresses: .. code-block:: none '192.168.0.1' -> False '192.168.32.0/24' -> False 'fe80::100/10' -> fe80::100/10 45443646733 -> ::a:94a7:50d '523454/24' -> False Filtering lists ^^^^^^^^^^^^^^^ You can filter entire lists - ``ipaddr()`` will return a list with values valid for a particular query. .. code-block:: yaml+jinja # Example list of values test_list: ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64'] # {{ test_list | ansible.netcommon.ipaddr }} ['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64'] # {{ test_list | ansible.netcommon.ipv4 }} ['192.24.2.1', '192.168.32.0/24'] # {{ test_list | ansible.netcommon.ipv6 }} ['::1', 'fe80::100/10', '2001:db8:32c:faad::/64'] Wrapping IPv6 addresses in [ ] brackets ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Some configuration files require IPv6 addresses to be "wrapped" in square brackets (``[ ]``). 
To accomplish that, you can use the ``ipwrap()`` filter. It will wrap all IPv6 addresses and leave any other strings intact. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipwrap }} ['192.24.2.1', 'host.fqdn', '[::1]', '192.168.32.0/24', '[fe80::100]/10', True, '', '[2001:db8:32c:faad::]/64'] As you can see, ``ipwrap()`` did not filter out non-IP address values, which is usually what you want when for example you are mixing IP addresses with hostnames. If you still want to filter out all non-IP address values, you can chain both filters together. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr | ansible.netcommon.ipwrap }} ['192.24.2.1', '[::1]', '192.168.32.0/24', '[fe80::100]/10', '[2001:db8:32c:faad::]/64'] Basic queries ^^^^^^^^^^^^^ You can provide a single argument to each ``ipaddr()`` filter. The filter will then treat it as a query and return values modified by that query. Lists will contain only values that you are querying for. Types of queries include: - query by name: ``ansible.netcommon.ipaddr('address')``, ``ansible.netcommon.ipv4('network')``; - query by CIDR range: ``ansible.netcommon.ipaddr('192.168.0.0/24')``, ``ansible.netcommon.ipv6('2001:db8::/32')``; - query by index number: ``ansible.netcommon.ipaddr('1')``, ``ansible.netcommon.ipaddr('-1')``; If a query type is not recognized, Ansible will raise an error. Getting information about hosts and networks ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Here's our test list again: .. code-block:: yaml+jinja # Example list of values test_list: ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64'] Let's take the list above and get only those elements that are host IP addresses and not network ranges: .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('address') }} ['192.24.2.1', '::1', 'fe80::100'] As you can see, even though some values had a host address with a CIDR prefix, they were dropped by the filter. If you want host IP addresses with their correct CIDR prefixes (as is common with IPv6 addressing), you can use the ``ipaddr('host')`` filter. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('host') }} ['192.24.2.1/32', '::1/128', 'fe80::100/10'] Filtering by IP address type also works. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipv4('address') }} ['192.24.2.1'] # {{ test_list | ansible.netcommon.ipv6('address') }} ['::1', 'fe80::100'] You can check if IP addresses or network ranges are accessible on a public Internet, or if they are in private networks: .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('public') }} ['192.24.2.1', '2001:db8:32c:faad::/64'] # {{ test_list | ansible.netcommon.ipaddr('private') }} ['192.168.32.0/24', 'fe80::100/10'] You can check which values are specifically network ranges: .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('net') }} ['192.168.32.0/24', '2001:db8:32c:faad::/64'] You can also check how many IP addresses can be in a certain range. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('size') }} [256, 18446744073709551616L] By specifying a network range as a query, you can check if a given value is in that range. .. 
code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('192.0.0.0/8') }} ['192.24.2.1', '192.168.32.0/24'] If you specify a positive or negative integer as a query, ``ipaddr()`` will treat this as an index and will return the specific IP address from a network range, in the 'host/prefix' format. .. code-block:: yaml+jinja # First IP address (network address) # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('0') }} ['192.168.32.0/24', '2001:db8:32c:faad::/64'] # Second IP address (usually the gateway host) # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('1') }} ['192.168.32.1/24', '2001:db8:32c:faad::1/64'] # Last IP address (the broadcast address in IPv4 networks) # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-1') }} ['192.168.32.255/24', '2001:db8:32c:faad:ffff:ffff:ffff:ffff/64'] You can also select IP addresses from a range by their index, from the start or end of the range. .. code-block:: yaml+jinja # Returns from the start of the range # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('200') }} ['192.168.32.200/24', '2001:db8:32c:faad::c8/64'] # Returns from the end of the range # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-200') }} ['192.168.32.56/24', '2001:db8:32c:faad:ffff:ffff:ffff:ff38/64'] # {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('400') }} ['2001:db8:32c:faad::190/64'] Getting information from host/prefix values ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You frequently use a combination of IP addresses and subnet prefixes ("CIDR"), this is even more common with IPv6. The ``ansible.netcommon.ipaddr()`` filter can extract useful data from these prefixes. Here's an example set of two host prefixes (with some "control" values): .. code-block:: yaml+jinja host_prefix: ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24', '127.0.0.1', '192.168.0.0/16'] First, let's make sure that we only work with correct host/prefix values, not just subnets or single IP addresses. .. code-block:: yaml+jinja # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') }} ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24'] In Debian-based systems, the network configuration stored in the ``/etc/network/interfaces`` file uses a combination of IP address, network address, netmask and broadcast address to configure an IPv4 network interface. We can get these values from a single 'host/prefix' combination: .. code-block:: jinja # Jinja2 template {% set ipv4_host = host_prefix | unique | ansible.netcommon.ipv4('host/prefix') | first %} iface eth0 inet static address {{ ipv4_host | ansible.netcommon.ipaddr('address') }} network {{ ipv4_host | ansible.netcommon.ipaddr('network') }} netmask {{ ipv4_host | ansible.netcommon.ipaddr('netmask') }} broadcast {{ ipv4_host | ansible.netcommon.ipaddr('broadcast') }} # Generated configuration file iface eth0 inet static address 192.0.2.48 network 192.0.2.0 netmask 255.255.255.0 broadcast 192.0.2.255 In the above example, we needed to handle the fact that values were stored in a list, which is unusual in IPv4 networks, where only a single IP address can be set on an interface. However, IPv6 networks can have multiple IP addresses set on an interface: .. 
code-block:: jinja # Jinja2 template iface eth0 inet6 static {% set ipv6_list = host_prefix | unique | ansible.netcommon.ipv6('host/prefix') %} address {{ ipv6_list[0] }} {% if ipv6_list | length > 1 %} {% for subnet in ipv6_list[1:] %} up /sbin/ip address add {{ subnet }} dev eth0 down /sbin/ip address del {{ subnet }} dev eth0 {% endfor %} {% endif %} # Generated configuration file iface eth0 inet6 static address 2001:db8:deaf:be11::ef3/64 If needed, you can extract subnet and prefix information from the 'host/prefix' value: .. code-block:: jinja # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('subnet') }} ['2001:db8:deaf:be11::/64', '192.0.2.0/24'] # {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('prefix') }} [64, 24] Converting subnet masks to CIDR notation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Given a subnet in the form of network address and subnet mask, the ``ipaddr()`` filter can convert it into CIDR notation. This can be useful for converting Ansible facts gathered about network configuration from subnet masks into CIDR format. .. code-block:: yaml+jinja ansible_default_ipv4: { address: "192.168.0.11", alias: "eth0", broadcast: "192.168.0.255", gateway: "192.168.0.1", interface: "eth0", macaddress: "fa:16:3e:c4:bd:89", mtu: 1500, netmask: "255.255.255.0", network: "192.168.0.0", type: "ether" } First concatenate the network and netmask: .. code-block:: yaml+jinja net_mask: "{{ ansible_default_ipv4.network }}/{{ ansible_default_ipv4.netmask }}" '192.168.0.0/255.255.255.0' This result can be converted to canonical form with ``ipaddr()`` to produce a subnet in CIDR format. .. code-block:: yaml+jinja # {{ net_mask | ansible.netcommon.ipaddr('prefix') }} '24' # {{ net_mask | ansible.netcommon.ipaddr('net') }} '192.168.0.0/24' Getting information about the network in CIDR notation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Given an IP address, the ``ipaddr()`` filter can produce the network address in CIDR notation. This can be useful when you want to obtain the network address from the IP address in CIDR format. Here's an example of IP address: .. code-block:: yaml+jinja ip_address: "{{ ansible_default_ipv4.address }}/{{ ansible_default_ipv4.netmask }}" '192.168.0.11/255.255.255.0' This can be used to obtain the network address in CIDR notation format. .. code-block:: yaml+jinja # {{ ip_address | ansible.netcommon.ipaddr('network/prefix') }} '192.168.0.0/24' IP address conversion ^^^^^^^^^^^^^^^^^^^^^ Here's our test list again: .. code-block:: yaml+jinja # Example list of values test_list: ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64'] You can convert IPv4 addresses into IPv6 addresses. .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipv4('ipv6') }} ['::ffff:192.24.2.1/128', '::ffff:192.168.32.0/120'] Converting from IPv6 to IPv4 works very rarely .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipv6('ipv4') }} ['0.0.0.1/32'] But we can make a double conversion if needed: .. code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('ipv6') | ansible.netcommon.ipaddr('ipv4') }} ['192.24.2.1/32', '0.0.0.1/32', '192.168.32.0/24'] You can convert IP addresses to integers, the same way that you can convert integers into IP addresses. .. 
code-block:: yaml+jinja # {{ test_list | ansible.netcommon.ipaddr('address') | ansible.netcommon.ipaddr('int') }} [3222798849, 1, '3232243712/24', '338288524927261089654018896841347694848/10', '42540766412265424405338506004571095040/64'] You can convert IPv4 address to `Hexadecimal notation <https://en.wikipedia.org/wiki/Hexadecimal>`_ with optional delimiter: .. code-block:: yaml+jinja # {{ '192.168.1.5' | ansible.netcommon.ip4_hex }} c0a80105 # {{ '192.168.1.5' | ansible.netcommon.ip4_hex(':') }} c0:a8:01:05 You can convert IP addresses to PTR records: .. code-block:: yaml+jinja # {% for address in test_list | ansible.netcommon.ipaddr %} # {{ address | ansible.netcommon.ipaddr('revdns') }} # {% endfor %} 1.2.24.192.in-addr.arpa. 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa. 0.32.168.192.in-addr.arpa. 0.0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa. 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.d.a.a.f.c.2.3.0.8.b.d.0.1.0.0.2.ip6.arpa. Converting IPv4 address to a 6to4 address ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A `6to4`_ tunnel is a way to access the IPv6 Internet from an IPv4-only network. If you have a public IPv4 address, you can automatically configure its IPv6 equivalent in the ``2002::/16`` network range. After conversion you will gain access to a ``2002:xxxx:xxxx::/48`` subnet which could be split into 65535 ``/64`` subnets if needed. To convert your IPv4 address, just send it through the ``'6to4'`` filter. It will be automatically converted to a router address (with a ``::1/48`` host address). .. code-block:: yaml+jinja # {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') }} 2002:c100:0200::1/48 .. _6to4: https://en.wikipedia.org/wiki/6to4 Finding IP addresses within a range ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To find usable IP addresses within an IP range, try these ``ipaddr`` filters: To find the next usable IP address in a range, use ``next_usable``: .. code-block:: yaml+jinja # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('next_usable') }} 192.168.122.2 To find the last usable IP address from a range, use ``last_usable``. .. code-block:: yaml+jinja # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('last_usable') }} 192.168.122.254 To find the available range of IP addresses from the given network address, use ``range_usable``. .. code-block:: yaml+jinja # {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('range_usable') }} 192.168.122.1-192.168.122.254 To find the peer IP address for a point to point link, use ``peer``. .. code-block:: yaml+jinja # {{ '192.168.122.1/31' | ansible.netcommon.ipaddr('peer') }} 192.168.122.0 # {{ '192.168.122.1/30' | ansible.netcommon.ipaddr('peer') }} 192.168.122.2 To return the nth ip from a network, use the filter ``nthhost``. .. code-block:: yaml+jinja # {{ '10.0.0.0/8' | ansible.netcommon.nthhost(305) }} 10.0.1.49 ``nthhost`` also supports a negative value. .. code-block:: yaml+jinja # {{ '10.0.0.0/8' | ansible.netcommon.nthhost(-1) }} 10.255.255.255 To find the next nth usable IP address in relation to another within a range, use ``next_nth_usable`` In the example, ``next_nth_usable`` returns the second usable IP address for the given IP range: .. code-block:: yaml+jinja # {{ '192.168.122.1/24' | ansible.netcommon.next_nth_usable(2) }} 192.168.122.3 If there is no usable address, it returns an empty string. .. 
code-block:: yaml+jinja # {{ '192.168.122.254/24' | ansible.netcommon.next_nth_usable(2) }} "" Just like ``next_nth_usable``, you have ``previous_nth_usable`` to find the previous usable address: .. code-block:: yaml+jinja # {{ '192.168.122.10/24' | ansible.netcommon.previous_nth_usable(2) }} 192.168.122.8 Testing if an address belongs to a network range ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``network_in_usable`` filter returns whether an address passed as an argument is usable in a network. Usable addresses are addresses that can be assigned to a host. The network ID and the broadcast address are not usable addresses. .. code-block:: yaml+jinja # {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.1' ) }} True # {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }} False # {{ '192.168.0.0/16' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }} True The ``network_in_network`` filter returns whether an address or a network passed as an argument is in a network. .. code-block:: yaml+jinja # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.1' ) }} True # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }} True # {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.255' ) }} True # Check if a network is part of another network # {{ '192.168.0.0/16' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }} True To check whether multiple addresses belong to a network, use the ``reduce_on_network`` filter. .. code-block:: yaml+jinja # {{ ['192.168.0.34', '10.3.0.3', '192.168.2.34'] | ansible.netcommon.reduce_on_network( '192.168.0.0/24' ) }} ['192.168.0.34'] IP Math ^^^^^^^ .. versionadded:: 2.7 The ``ipmath()`` filter can be used to do simple IP math/arithmetic. Here are a few simple examples: .. code-block:: yaml+jinja # Get the next five addresses based on an IP address # {{ '192.168.1.5' | ansible.netcommon.ipmath(5) }} 192.168.1.10 # Get the ten previous addresses based on an IP address # {{ '192.168.0.5' | ansible.netcommon.ipmath(-10) }} 192.167.255.251 # Get the next five addresses using CIDR notation # {{ '192.168.1.1/24' | ansible.netcommon.ipmath(5) }} 192.168.1.6 # Get the previous five addresses using CIDR notation # {{ '192.168.1.6/24' | ansible.netcommon.ipmath(-5) }} 192.168.1.1 # Get the previous ten addresses using CIDR notation # It returns an address of the previous network range # {{ '192.168.2.6/24' | ansible.netcommon.ipmath(-10) }} 192.168.1.252 # Get the next ten addresses in IPv6 # {{ '2001::1' | ansible.netcommon.ipmath(10) }} 2001::b # Get the previous ten addresses in IPv6 # {{ '2001::5' | ansible.netcommon.ipmath(-10) }} 2000:ffff:ffff:ffff:ffff:ffff:ffff:fffb Subnet manipulation ^^^^^^^^^^^^^^^^^^^ The ``ipsubnet()`` filter can be used to manipulate network subnets in several ways. Here is an example IP address and subnet: .. code-block:: yaml+jinja address: '192.168.144.5' subnet: '192.168.0.0/16' To check if a given string is a subnet, pass it through the filter without any arguments. If the given string is an IP address, it will be converted into a subnet. .. code-block:: yaml+jinja # {{ address | ansible.netcommon.ipsubnet }} 192.168.144.5/32 # {{ subnet | ansible.netcommon.ipsubnet }} 192.168.0.0/16 If you specify a subnet size as the first parameter of the ``ipsubnet()`` filter, and the subnet size is **smaller than the current one**, you will get the number of subnets a given subnet can be split into. ..
code-block:: yaml+jinja # {{ subnet | ansible.netcommon.ipsubnet(20) }} 16 The second argument of the ``ipsubnet()`` filter is an index number; by specifying it you can get a new subnet with the specified size. .. code-block:: yaml+jinja # First subnet # {{ subnet | ansible.netcommon.ipsubnet(20, 0) }} 192.168.0.0/20 # Last subnet # {{ subnet | ansible.netcommon.ipsubnet(20, -1) }} 192.168.240.0/20 # Fifth subnet # {{ subnet | ansible.netcommon.ipsubnet(20, 5) }} 192.168.80.0/20 # Fifth to last subnet # {{ subnet | ansible.netcommon.ipsubnet(20, -5) }} 192.168.176.0/20 If you specify an IP address instead of a subnet, and give a subnet size as the first argument, the ``ipsubnet()`` filter will instead return the biggest subnet that contains that given IP address. .. code-block:: yaml+jinja # {{ address | ansible.netcommon.ipsubnet(20) }} 192.168.144.0/20 By specifying an index number as a second argument, you can select smaller and smaller subnets. .. code-block:: yaml+jinja # First subnet # {{ address | ansible.netcommon.ipsubnet(18, 0) }} 192.168.128.0/18 # Last subnet # {{ address | ansible.netcommon.ipsubnet(18, -1) }} 192.168.144.4/31 # Fifth subnet # {{ address | ansible.netcommon.ipsubnet(18, 5) }} 192.168.144.0/23 # Fifth to last subnet # {{ address | ansible.netcommon.ipsubnet(18, -5) }} 192.168.144.0/27 By specifying another subnet as a second argument, if the second subnet includes the first, you can determine the rank of the first subnet in the second. .. code-block:: yaml+jinja # The rank of the IP in the subnet (the IP is the 36870nth /32 of the subnet) # {{ address | ansible.netcommon.ipsubnet(subnet) }} 36870 # The rank in the /24 that contain the address # {{ address | ansible.netcommon.ipsubnet('192.168.144.0/24') }} 6 # An IP with the subnet in the first /30 in a /24 # {{ '192.168.144.1/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }} 1 # The fifth subnet /30 in a /24 # {{ '192.168.144.16/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }} 5 If the second subnet doesn't include the first subnet, the ``ipsubnet()`` filter raises an error. You can use the ``ipsubnet()`` filter with the ``ipaddr()`` filter to, for example, split a given ``/48`` prefix into smaller ``/64`` subnets: .. code-block:: yaml+jinja # {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') | ipsubnet(64, 58820) | ansible.netcommon.ipaddr('1') }} 2002:c100:200:e5c4::1/64 Because of the size of IPv6 subnets, iteration over all of them to find the correct one may take some time on slower computers, depending on the size difference between the subnets. Subnet Merging ^^^^^^^^^^^^^^ .. versionadded:: 2.6 The ``cidr_merge()`` filter can be used to merge subnets or individual addresses into their minimal representation, collapsing overlapping subnets and merging adjacent ones wherever possible. .. code-block:: yaml+jinja {{ ['192.168.0.0/17', '192.168.128.0/17', '192.168.128.1' ] | cidr_merge }} # => ['192.168.0.0/16'] {{ ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24'] | cidr_merge }} # => ['192.168.0.0/23', '192.168.3.0/24'] Changing the action from 'merge' to 'span' will instead return the smallest subnet which contains all of the inputs. .. 
code-block:: yaml+jinja {{ ['192.168.0.0/24', '192.168.3.0/24'] | ansible.netcommon.cidr_merge('span') }} # => '192.168.0.0/22' {{ ['192.168.1.42', '192.168.42.1'] | ansible.netcommon.cidr_merge('span') }} # => '192.168.0.0/18' MAC address filter ^^^^^^^^^^^^^^^^^^ You can use the ``hwaddr()`` filter to check if a given string is a MAC address or convert it between various formats. Examples: .. code-block:: yaml+jinja # Example MAC address macaddress: '1a:2b:3c:4d:5e:6f' # Check if a given string is a MAC address # {{ macaddress | ansible.netcommon.hwaddr }} 1a:2b:3c:4d:5e:6f # Convert MAC address to PostgreSQL format # {{ macaddress | ansible.netcommon.hwaddr('pgsql') }} 1a2b3c:4d5e6f # Convert MAC address to Cisco format # {{ macaddress | ansible.netcommon.hwaddr('cisco') }} 1a2b.3c4d.5e6f The supported formats result in the following conversions for the ``1a:2b:3c:4d:5e:6f`` MAC address: .. code-block:: yaml+jinja bare: 1A2B3C4D5E6F bool: True int: 28772997619311 cisco: 1a2b.3c4d.5e6f eui48 or win: 1A-2B-3C-4D-5E-6F linux or unix: 1a:2b:3c:4d:5e:6f pgsql, postgresql, or psql: 1a2b3c:4d5e6f Generate an IPv6 address in Stateless Configuration (SLAAC) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``slaac()`` filter generates an IPv6 address for a given network and MAC address in stateless configuration. .. code-block:: yaml+jinja # {{ 'fdcf:1894:23b5:d38c:0000:0000:0000:0000' | slaac('c2:31:b3:83:bf:2b') }} fdcf:1894:23b5:d38c:c031:b3ff:fe83:bf2b .. seealso:: `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ Ansible network collection for common code :ref:`about_playbooks` An introduction to playbooks :ref:`playbooks_filters` Introduction to Jinja2 filters and their uses :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables :ref:`playbooks_loops` Looping in playbooks :ref:`playbooks_reuse_roles` Playbook organization by roles :ref:`playbooks_best_practices` Tips and tricks for playbooks `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the Google group! :ref:`communication_irc` How to join Ansible chat channels
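Since the page above describes `ipaddr()` as an interface to the `netaddr` package, its basic accept/reject behavior can be approximated directly in Python. This is a rough sketch under that assumption, not the filter's real implementation; `ipaddr_like` is a hypothetical name, and it assumes `netaddr` is installed (`pip install netaddr`).

```python
from netaddr import IPAddress, IPNetwork, AddrFormatError


def ipaddr_like(value):
    """Rough stand-in for the ipaddr() basic test: return the value if it
    parses as an IP address or network, else False."""
    for parser in (IPAddress, IPNetwork):
        try:
            parser(value)
            return value
        except (AddrFormatError, ValueError, TypeError):
            continue
    return False


print(ipaddr_like('192.168.32.0/24'))     # -> 192.168.32.0/24
print(ipaddr_like('localhost'))           # -> False
print(IPNetwork('192.168.32.0/24').size)  # -> 256, matching the 'size' query above
```

The real filter also normalizes integers and 'address/netmask' forms, which this sketch deliberately ignores.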
closed
ansible/ansible
https://github.com/ansible/ansible
77,280
to_nice_yaml: cannot represent an object
### Summary With Ansible v2.9 (on Python v3.6.9 provided by Ubuntu 18.04 LTS) and Jinja2 v2.11.3, the playbook (see below) works to cast a number into a string in the YAML output: ```shell $ ansible-playbook convert.yml PLAY [localhost] ****************************************************************************************************************************************************************************** TASK [Gathering Facts] ************************************************************************************************************************************************************************ ok: [localhost] TASK [set_fact] ******************************************************************************************************************************************************************************* ok: [localhost] TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "msg": "x: my_string\ny: 5\nz: '300'\n" } PLAY RECAP ************************************************************************************************************************************************************************************ localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` Unfortunately, with an updated stack (ansible-core v2.12.3, Jinja2 v3.0.3, Python v3.8.0) I get a weird error `cannot represent an object`: ```shell $ ansible-playbook convert.yml [WARNING]: Unable to parse /etc/dallmeier/environment as an inventory source [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ****************************************************************************************************************************************************************************** TASK [Gathering Facts] ************************************************************************************************************************************************************************ ok: [localhost] TASK [set_fact] ******************************************************************************************************************************************************************************* ok: [localhost] TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "to_nice_yaml - ('cannot represent an object', '300'). 
('cannot represent an object', '300')"} PLAY RECAP ************************************************************************************************************************************************************************************ localhost : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` Jinja2 native is enabled in /etc/ansible/ansible.cfg: ```ini [defaults] jinja2_native = True ``` ### Issue Type Bug Report ### Component Name to_nice_yaml ### Ansible Version ```console $ ansible --version ansible [core 2.12.3] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/au/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible ansible collection location = /home/au/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.8.0 (default, Dec 9 2021, 17:53:27) [GCC 8.4.0] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_ASK_VAULT_PASS(/etc/ansible/ansible.cfg) = False DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/foobar/environment'] DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True ``` ### OS / Environment Ubuntu Server 18.04.6 LTS ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - hosts: localhost vars: foo: 300 tasks: - set_fact: bar: x: "my_string" y: 5 z: '{{ foo |string }}' - debug: var=bar - debug: msg='{{ bar |to_nice_yaml }}' ``` ### Expected Results `to_nice_yaml` displays the number as string ### Actual Results ```console TASK [set_fact] ******************************************************************************************************************************************************************************* task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:5 ok: [localhost] => { "ansible_facts": { "bar": { "x": "my_string", "y": 5, "z": "300" } }, "changed": false } TASK [debug] ********************************************************************************************************************************************************************************** task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:10 ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:11 fatal: [localhost]: FAILED! => { "msg": "to_nice_yaml - ('cannot represent an object', '300'). ('cannot represent an object', '300')" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77280
https://github.com/ansible/ansible/pull/77282
d8687bd0158c23e63b13c8c05f83d1a8628e1264
c9db73f04e7a5fae7bbbdff8efbd585d15971d31
2022-03-14T17:27:04Z
python
2022-03-16T10:24:21Z
changelogs/fragments/nativejinjatext-yaml-representer.yml
closed
ansible/ansible
https://github.com/ansible/ansible
77,280
to_nice_yaml: cannot represent an object
### Summary With Ansible v2.9 (on Python v3.6.9 provided by Ubuntu 18.04 LTS) and Jinja2 v2.11.3, the playbook (see below) works to cast a number into a string in the YAML output: ```shell $ ansible-playbook convert.yml PLAY [localhost] ****************************************************************************************************************************************************************************** TASK [Gathering Facts] ************************************************************************************************************************************************************************ ok: [localhost] TASK [set_fact] ******************************************************************************************************************************************************************************* ok: [localhost] TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "msg": "x: my_string\ny: 5\nz: '300'\n" } PLAY RECAP ************************************************************************************************************************************************************************************ localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` Unfortunately, with an updated stack (ansible-core v2.12.3, Jinja2 v3.0.3, Python v3.8.0) I get a weird error `cannot represent an object`: ```shell $ ansible-playbook convert.yml [WARNING]: Unable to parse /etc/dallmeier/environment as an inventory source [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ****************************************************************************************************************************************************************************** TASK [Gathering Facts] ************************************************************************************************************************************************************************ ok: [localhost] TASK [set_fact] ******************************************************************************************************************************************************************************* ok: [localhost] TASK [debug] ********************************************************************************************************************************************************************************** ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"msg": "to_nice_yaml - ('cannot represent an object', '300'). 
('cannot represent an object', '300')"} PLAY RECAP ************************************************************************************************************************************************************************************ localhost : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` Jinja2 native is enabled in /etc/ansible/ansible.cfg: ```ini [defaults] jinja2_native = True ``` ### Issue Type Bug Report ### Component Name to_nice_yaml ### Ansible Version ```console $ ansible --version ansible [core 2.12.3] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/au/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible ansible collection location = /home/au/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.8.0 (default, Dec 9 2021, 17:53:27) [GCC 8.4.0] jinja version = 3.0.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed DEFAULT_ASK_VAULT_PASS(/etc/ansible/ansible.cfg) = False DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/foobar/environment'] DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True ``` ### OS / Environment Ubuntu Server 18.04.6 LTS ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - hosts: localhost vars: foo: 300 tasks: - set_fact: bar: x: "my_string" y: 5 z: '{{ foo |string }}' - debug: var=bar - debug: msg='{{ bar |to_nice_yaml }}' ``` ### Expected Results `to_nice_yaml` displays the number as string ### Actual Results ```console TASK [set_fact] ******************************************************************************************************************************************************************************* task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:5 ok: [localhost] => { "ansible_facts": { "bar": { "x": "my_string", "y": 5, "z": "300" } }, "changed": false } TASK [debug] ********************************************************************************************************************************************************************************** task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:10 ok: [localhost] => { "bar": { "x": "my_string", "y": 5, "z": "300" } } TASK [debug] ********************************************************************************************************************************************************************************** task path: /home/au/src/asa-branches/v0.3/ansible/convert.yml:11 fatal: [localhost]: FAILED! => { "msg": "to_nice_yaml - ('cannot represent an object', '300'). ('cannot represent an object', '300')" } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/77280
https://github.com/ansible/ansible/pull/77282
d8687bd0158c23e63b13c8c05f83d1a8628e1264
c9db73f04e7a5fae7bbbdff8efbd585d15971d31
2022-03-14T17:27:04Z
python
2022-03-16T10:24:21Z
lib/ansible/parsing/yaml/dumper.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import yaml

from ansible.module_utils.six import text_type, binary_type
from ansible.module_utils.common.yaml import SafeDumper
from ansible.parsing.yaml.objects import AnsibleUnicode, AnsibleSequence, AnsibleMapping, AnsibleVaultEncryptedUnicode
from ansible.utils.unsafe_proxy import AnsibleUnsafeText, AnsibleUnsafeBytes, NativeJinjaUnsafeText
from ansible.template import AnsibleUndefined
from ansible.vars.hostvars import HostVars, HostVarsVars
from ansible.vars.manager import VarsWithSources


class AnsibleDumper(SafeDumper):
    '''
    A simple stub class that allows us to add representers
    for our overridden object types.
    '''


def represent_hostvars(self, data):
    return self.represent_dict(dict(data))


# Note: only want to represent the encrypted data
def represent_vault_encrypted_unicode(self, data):
    return self.represent_scalar(u'!vault', data._ciphertext.decode(), style='|')


def represent_unicode(self, data):
    return yaml.representer.SafeRepresenter.represent_str(self, text_type(data))


def represent_binary(self, data):
    return yaml.representer.SafeRepresenter.represent_binary(self, binary_type(data))


def represent_undefined(self, data):
    # Here bool will ensure _fail_with_undefined_error happens
    # if the value is Undefined.
    # This happens because Jinja sets __bool__ on StrictUndefined
    return bool(data)


AnsibleDumper.add_representer(
    AnsibleUnicode,
    represent_unicode,
)

AnsibleDumper.add_representer(
    AnsibleUnsafeText,
    represent_unicode,
)

AnsibleDumper.add_representer(
    AnsibleUnsafeBytes,
    represent_binary,
)

AnsibleDumper.add_representer(
    HostVars,
    represent_hostvars,
)

AnsibleDumper.add_representer(
    HostVarsVars,
    represent_hostvars,
)

AnsibleDumper.add_representer(
    VarsWithSources,
    represent_hostvars,
)

AnsibleDumper.add_representer(
    AnsibleSequence,
    yaml.representer.SafeRepresenter.represent_list,
)

AnsibleDumper.add_representer(
    AnsibleMapping,
    yaml.representer.SafeRepresenter.represent_dict,
)

AnsibleDumper.add_representer(
    AnsibleVaultEncryptedUnicode,
    represent_vault_encrypted_unicode,
)

AnsibleDumper.add_representer(
    AnsibleUndefined,
    represent_undefined,
)

AnsibleDumper.add_representer(
    NativeJinjaUnsafeText,
    represent_unicode,
)
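The dumper above works by registering representers for Ansible's string subclasses; the reported `('cannot represent an object', '300')` failure is exactly what PyYAML raises when a `str` subclass reaches `SafeDumper` without one, because representers are looked up by exact type. Below is a self-contained reproduction and the same fix pattern; `MyText` is a hypothetical stand-in for the native-Jinja text type, not Ansible's actual class.

```python
import yaml


class MyText(str):
    """Hypothetical str subclass, standing in for Ansible's native-Jinja text types."""


class MyDumper(yaml.SafeDumper):
    """Stub dumper, mirroring the AnsibleDumper pattern above."""


data = {'z': MyText('300')}

try:
    yaml.dump(data, Dumper=MyDumper)
except yaml.representer.RepresenterError as exc:
    # Exact-type lookup fails for the subclass, so PyYAML falls through to
    # represent_undefined and raises the error seen in the issue.
    print(exc)  # ('cannot represent an object', '300')

# The fix pattern: register the plain-str representer for the subclass.
MyDumper.add_representer(MyText, yaml.representer.SafeRepresenter.represent_str)

print(yaml.dump(data, Dumper=MyDumper, default_flow_style=False), end='')  # z: '300'
```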
closed
ansible/ansible
https://github.com/ansible/ansible
73,699
Inventory cache not flushed when using `--flush-cache`
##### SUMMARY <!--- Explain the problem briefly below --> The inventory cache is not flushed when using `--flush-cache` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cli and/or inventory manager ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.16 config file = /home/ioguix/git/support/ansible/ansible.cfg configured module search path = ['/home/ioguix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below DEFAULT_HOST_LIST(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible/bug_report.yml'] DEFAULT_INVENTORY_PLUGIN_PATH(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible'] INVENTORY_CACHE_ENABLED(/tmp/prov/ansible/ansible.cfg) = True INVENTORY_CACHE_PLUGIN(/tmp/prov/ansible/ansible.cfg) = jsonfile INVENTORY_CACHE_PLUGIN_CONNECTION(/tmp/prov/ansible/ansible.cfg) = . ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Debian 10 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ansible.cfg: ```yaml [defaults] inventory_plugins = . inventory = bug_report.yml [inventory] cache=True cache_plugin=jsonfile cache_connection=. ``` Dummy inventory plugin `bug_report.py` displaying if the inventory manager gave `cache=True` or not: ```python from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable class InventoryModule(BaseInventoryPlugin, Cacheable): def parse(self, inventory, loader, path, cache=True): if cache: self.display.v("Using data from cache") else: self.display.v("Updating cache") ``` Inventory yaml file `bug_report.yml`: ```yml --- plugin: bug_report ``` ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> The doc state in https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html#inventory-cache: > This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as via --flush-cache or the meta task refresh_inventory). From previous minimal test case, as far as I understand the doc and comments in other inventory plugins, the first call, using `--flush-cache`, should output `Updating cache`. The second one should output `Using data from cache`: ```console $ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Updating cache [...] $ ansible-playbook -v --list-hosts playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> No matter if using `--flush-cache` or not, the inventory manager is setting `cache=True` and the plugin always shows `Using data from cache`: ```console $ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] 
$ ansible-playbook -v --list-hosts playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] ``` Looking at source code in current devel branch, it seems `--flush-cache` only flush facts and ignore inventory: https://github.com/ansible/ansible/blob/devel/lib/ansible/cli/playbook.py#L208
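For reference, the dev-guide pattern the reporter links expects plugins to combine the manager-supplied `cache` flag with their own `cache` option, roughly as sketched below; the bug is that `--flush-cache` never made the manager pass `cache=False`, so the `update_cache` path could not trigger. This sketch extends the reporter's dummy plugin under the assumption that the standard `Cacheable` helpers (`load_cache_plugin`, `get_cache_key`, `_cache`) behave as documented; it omits the `DOCUMENTATION` block a real plugin needs for `get_option()` to work, and `_fetch_hosts` is hypothetical.

```python
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable


class InventoryModule(BaseInventoryPlugin, Cacheable):
    NAME = 'bug_report'

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)

        # 'cache' comes from the inventory manager and should be False when
        # the inventory is being refreshed (--flush-cache / meta: refresh_inventory).
        use_cache = self.get_option('cache') and cache
        update_cache = self.get_option('cache') and not cache

        results = None
        if use_cache:
            try:
                results = self._cache[cache_key]
            except KeyError:
                update_cache = True

        if results is None:
            results = self._fetch_hosts()  # hypothetical expensive source query

        if update_cache:
            self._cache[cache_key] = results

        for host in results:
            self.inventory.add_host(host)

    def _fetch_hosts(self):
        return ['host1', 'host2']
```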
https://github.com/ansible/ansible/issues/73699
https://github.com/ansible/ansible/pull/77083
c9db73f04e7a5fae7bbbdff8efbd585d15971d31
94b73d66d53ca0df9911d1dd412bc1d3c9181b1b
2021-02-23T16:38:41Z
python
2022-03-17T18:15:03Z
changelogs/fragments/inventory_manager_flush_cache.yml
closed
ansible/ansible
https://github.com/ansible/ansible
73,699
Inventory cache not flushed when using `--flush-cache`
##### SUMMARY <!--- Explain the problem briefly below --> The inventory cache is not flushed when using `--flush-cache` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cli and/or inventory manager ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.16 config file = /home/ioguix/git/support/ansible/ansible.cfg configured module search path = ['/home/ioguix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below DEFAULT_HOST_LIST(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible/bug_report.yml'] DEFAULT_INVENTORY_PLUGIN_PATH(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible'] INVENTORY_CACHE_ENABLED(/tmp/prov/ansible/ansible.cfg) = True INVENTORY_CACHE_PLUGIN(/tmp/prov/ansible/ansible.cfg) = jsonfile INVENTORY_CACHE_PLUGIN_CONNECTION(/tmp/prov/ansible/ansible.cfg) = . ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Debian 10 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ansible.cfg: ```yaml [defaults] inventory_plugins = . inventory = bug_report.yml [inventory] cache=True cache_plugin=jsonfile cache_connection=. ``` Dummy inventory plugin `bug_report.py` displaying if the inventory manager gave `cache=True` or not: ```python from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable class InventoryModule(BaseInventoryPlugin, Cacheable): def parse(self, inventory, loader, path, cache=True): if cache: self.display.v("Using data from cache") else: self.display.v("Updating cache") ``` Inventory yaml file `bug_report.yml`: ```yml --- plugin: bug_report ``` ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> The doc state in https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html#inventory-cache: > This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as via --flush-cache or the meta task refresh_inventory). From previous minimal test case, as far as I understand the doc and comments in other inventory plugins, the first call, using `--flush-cache`, should output `Updating cache`. The second one should output `Using data from cache`: ```console $ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Updating cache [...] $ ansible-playbook -v --list-hosts playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> No matter if using `--flush-cache` or not, the inventory manager is setting `cache=True` and the plugin always shows `Using data from cache`: ```console $ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] 
$ ansible-playbook -v --list-hosts playbook-dummy.yml Using /tmp/prov/ansible/ansible.cfg as config file Using data from cache [...] ``` Looking at source code in current devel branch, it seems `--flush-cache` only flush facts and ignore inventory: https://github.com/ansible/ansible/blob/devel/lib/ansible/cli/playbook.py#L208
https://github.com/ansible/ansible/issues/73699
https://github.com/ansible/ansible/pull/77083
c9db73f04e7a5fae7bbbdff8efbd585d15971d31
94b73d66d53ca0df9911d1dd412bc1d3c9181b1b
2021-02-23T16:38:41Z
python
2022-03-17T18:15:03Z
lib/ansible/cli/__init__.py
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]> # Copyright: (c) 2016, Toshio Kuratomi <[email protected]> # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import sys # Used for determining if the system is running a new enough python version # and should only restrict on our documented minimum versions if sys.version_info < (3, 8): raise SystemExit( 'ERROR: Ansible requires Python 3.8 or newer on the controller. ' 'Current version: %s' % ''.join(sys.version.splitlines()) ) from importlib.metadata import version from ansible.module_utils.compat.version import LooseVersion # Used for determining if the system is running a new enough Jinja2 version # and should only restrict on our documented minimum versions jinja2_version = version('jinja2') if jinja2_version < LooseVersion('3.0'): raise SystemExit( 'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. ' 'Current version: %s' % jinja2_version ) import errno import getpass import os import subprocess import traceback from abc import ABC, abstractmethod from pathlib import Path try: from ansible import constants as C from ansible.utils.display import Display, initialize_locale initialize_locale() display = Display() except Exception as e: print('ERROR: %s' % e, file=sys.stderr) sys.exit(5) from ansible import context from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError from ansible.inventory.manager import InventoryManager from ansible.module_utils.six import string_types from ansible.module_utils._text import to_bytes, to_text from ansible.module_utils.common.file import is_executable from ansible.parsing.dataloader import DataLoader from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret from ansible.plugins.loader import add_all_plugin_dirs from ansible.release import __version__ from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path from ansible.utils.path import unfrackpath from ansible.utils.unsafe_proxy import to_unsafe_text from ansible.vars.manager import VariableManager try: import argcomplete HAS_ARGCOMPLETE = True except ImportError: HAS_ARGCOMPLETE = False class CLI(ABC): ''' code behind bin/ansible* programs ''' PAGER = 'less' # -F (quit-if-one-screen) -R (allow raw ansi control chars) # -S (chop long lines) -X (disable termcap init and de-init) LESS_OPTS = 'FRSX' SKIP_INVENTORY_DEFAULTS = False def __init__(self, args, callback=None): """ Base init method for all command line programs """ if not args: raise ValueError('A non-empty list for args is required') self.args = args self.parser = None self.callback = callback if C.DEVEL_WARNING and __version__.endswith('dev0'): display.warning( 'You are running the development version of Ansible. You should only run Ansible from "devel" if ' 'you are modifying the Ansible engine, or trying out features under development. This is a rapidly ' 'changing source of code and can become unstable at any point.' ) @abstractmethod def run(self): """Run the ansible command Subclasses must implement this method. It does the actual work of running an Ansible command. 
""" self.parse() display.vv(to_text(opt_help.version(self.parser.prog))) if C.CONFIG_FILE: display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE)) else: display.v(u"No config file found; using defaults") # warn about deprecated config options for deprecated in C.config.DEPRECATED: name = deprecated[0] why = deprecated[1]['why'] if 'alternatives' in deprecated[1]: alt = ', use %s instead' % deprecated[1]['alternatives'] else: alt = '' ver = deprecated[1].get('version') date = deprecated[1].get('date') collection_name = deprecated[1].get('collection_name') display.deprecated("%s option, %s%s" % (name, why, alt), version=ver, date=date, collection_name=collection_name) @staticmethod def split_vault_id(vault_id): # return (before_@, after_@) # if no @, return whole string as after_ if '@' not in vault_id: return (None, vault_id) parts = vault_id.split('@', 1) ret = tuple(parts) return ret @staticmethod def build_vault_ids(vault_ids, vault_password_files=None, ask_vault_pass=None, create_new_password=None, auto_prompt=True): vault_password_files = vault_password_files or [] vault_ids = vault_ids or [] # convert vault_password_files into vault_ids slugs for password_file in vault_password_files: id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file) # note this makes --vault-id higher precedence than --vault-password-file # if we want to intertwingle them in order probably need a cli callback to populate vault_ids # used by --vault-id and --vault-password-file vault_ids.append(id_slug) # if an action needs an encrypt password (create_new_password=True) and we dont # have other secrets setup, then automatically add a password prompt as well. # prompts cant/shouldnt work without a tty, so dont add prompt secrets if ask_vault_pass or (not vault_ids and auto_prompt): id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass') vault_ids.append(id_slug) return vault_ids # TODO: remove the now unused args @staticmethod def setup_vault_secrets(loader, vault_ids, vault_password_files=None, ask_vault_pass=None, create_new_password=False, auto_prompt=True): # list of tuples vault_secrets = [] # Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id) # we need to show different prompts. This is for compat with older Towers that expect a # certain vault password prompt format, so 'promp_ask_vault_pass' vault_id gets the old format. 
prompt_formats = {} # If there are configured default vault identities, they are considered 'first' # so we prepend them to vault_ids (from cli) here vault_password_files = vault_password_files or [] if C.DEFAULT_VAULT_PASSWORD_FILE: vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE) if create_new_password: prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ', 'Confirm new vault password (%(vault_id)s): '] # 2.3 format prompts for --ask-vault-pass prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ', 'Confirm New Vault password: '] else: prompt_formats['prompt'] = ['Vault password (%(vault_id)s): '] # The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$' prompt_formats['prompt_ask_vault_pass'] = ['Vault password: '] vault_ids = CLI.build_vault_ids(vault_ids, vault_password_files, ask_vault_pass, create_new_password, auto_prompt=auto_prompt) last_exception = found_vault_secret = None for vault_id_slug in vault_ids: vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug) if vault_id_value in ['prompt', 'prompt_ask_vault_pass']: # --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little # confusing since it will use the old format without the vault id in the prompt built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY # choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass # always gets the old format for Tower compatibility. # ie, we used --ask-vault-pass, so we need to use the old vault password prompt # format since Tower needs to match on that format. prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value], vault_id=built_vault_id) # a empty or invalid password from the prompt will warn and continue to the next # without erroring globally try: prompted_vault_secret.load() except AnsibleError as exc: display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc)) raise found_vault_secret = True vault_secrets.append((built_vault_id, prompted_vault_secret)) # update loader with new secrets incrementally, so we can load a vault password # that is encrypted with a vault secret provided earlier loader.set_vault_secrets(vault_secrets) continue # assuming anything else is a password file display.vvvvv('Reading vault password file: %s' % vault_id_value) # read vault_pass from a file try: file_vault_secret = get_file_vault_secret(filename=vault_id_value, vault_id=vault_id_name, loader=loader) except AnsibleError as exc: display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc))) last_exception = exc continue try: file_vault_secret.load() except AnsibleError as exc: display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc))) last_exception = exc continue found_vault_secret = True if vault_id_name: vault_secrets.append((vault_id_name, file_vault_secret)) else: vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret)) # update loader with as-yet-known vault secrets loader.set_vault_secrets(vault_secrets) # An invalid or missing password file will error globally # if no valid vault secret was found. 
if last_exception and not found_vault_secret: raise last_exception return vault_secrets @staticmethod def _get_secret(prompt): secret = getpass.getpass(prompt=prompt) if secret: secret = to_unsafe_text(secret) return secret @staticmethod def ask_passwords(): ''' prompt for connection and become passwords if needed ''' op = context.CLIARGS sshpass = None becomepass = None become_prompt = '' become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper() try: become_prompt = "%s password: " % become_prompt_method if op['ask_pass']: sshpass = CLI._get_secret("SSH password: ") become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method elif op['connection_password_file']: sshpass = CLI.get_password_from_file(op['connection_password_file']) if op['become_ask_pass']: becomepass = CLI._get_secret(become_prompt) if op['ask_pass'] and becomepass == '': becomepass = sshpass elif op['become_password_file']: becomepass = CLI.get_password_from_file(op['become_password_file']) except EOFError: pass return (sshpass, becomepass) def validate_conflicts(self, op, runas_opts=False, fork_opts=False): ''' check for conflicting options ''' if fork_opts: if op.forks < 1: self.parser.error("The number of processes (--forks) must be >= 1") return op @abstractmethod def init_parser(self, usage="", desc=None, epilog=None): """ Create an options parser for most ansible scripts Subclasses need to implement this method. They will usually call the base class's init_parser to create a basic version and then add their own options on top of that. An implementation will look something like this:: def init_parser(self): super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True) ansible.arguments.option_helpers.add_runas_options(self.parser) self.parser.add_option('--my-option', dest='my_option', action='store') """ self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog) @abstractmethod def post_process_args(self, options): """Process the command line args Subclasses need to implement this method. This method validates and transforms the command line arguments. It can be used to check whether conflicting values were given, whether filenames exist, etc. An implementation will look something like this:: def post_process_args(self, options): options = super(MyCLI, self).post_process_args(options) if options.addition and options.subtraction: raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified') if isinstance(options.listofhosts, string_types): options.listofhosts = string_types.split(',') return options """ # process tags if hasattr(options, 'tags') and not options.tags: # optparse defaults does not do what's expected # More specifically, we want `--tags` to be additive. 
So we cannot # simply change C.TAGS_RUN's default to ["all"] because then passing # --tags foo would cause us to have ['all', 'foo'] options.tags = ['all'] if hasattr(options, 'tags') and options.tags: tags = set() for tag_set in options.tags: for tag in tag_set.split(u','): tags.add(tag.strip()) options.tags = list(tags) # process skip_tags if hasattr(options, 'skip_tags') and options.skip_tags: skip_tags = set() for tag_set in options.skip_tags: for tag in tag_set.split(u','): skip_tags.add(tag.strip()) options.skip_tags = list(skip_tags) # process inventory options except for CLIs that require their own processing if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS: if options.inventory: # should always be list if isinstance(options.inventory, string_types): options.inventory = [options.inventory] # Ensure full paths when needed options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory] else: options.inventory = C.DEFAULT_HOST_LIST return options def parse(self): """Parse the command line args This method parses the command line arguments. It uses the parser stored in the self.parser attribute and saves the args and options in context.CLIARGS. Subclasses need to implement two helper methods, init_parser() and post_process_args() which are called from this function before and after parsing the arguments. """ self.init_parser() if HAS_ARGCOMPLETE: argcomplete.autocomplete(self.parser) try: options = self.parser.parse_args(self.args[1:]) except SystemExit as e: if(e.code != 0): self.parser.exit(status=2, message=" \n%s" % self.parser.format_help()) raise options = self.post_process_args(options) context._init_global_context(options) @staticmethod def version_info(gitinfo=False): ''' return full ansible version info ''' if gitinfo: # expensive call, user with care ansible_version_string = opt_help.version() else: ansible_version_string = __version__ ansible_version = ansible_version_string.split()[0] ansible_versions = ansible_version.split('.') for counter in range(len(ansible_versions)): if ansible_versions[counter] == "": ansible_versions[counter] = 0 try: ansible_versions[counter] = int(ansible_versions[counter]) except Exception: pass if len(ansible_versions) < 3: for counter in range(len(ansible_versions), 3): ansible_versions.append(0) return {'string': ansible_version_string.strip(), 'full': ansible_version, 'major': ansible_versions[0], 'minor': ansible_versions[1], 'revision': ansible_versions[2]} @staticmethod def pager(text): ''' find reasonable way to display text ''' # this is a much simpler form of what is in pydoc.py if not sys.stdout.isatty(): display.display(text, screen_only=True) elif 'PAGER' in os.environ: if sys.platform == 'win32': display.display(text, screen_only=True) else: CLI.pager_pipe(text, os.environ['PAGER']) else: p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p.communicate() if p.returncode == 0: CLI.pager_pipe(text, 'less') else: display.display(text, screen_only=True) @staticmethod def pager_pipe(text, cmd): ''' pipe text through a pager ''' if 'LESS' not in os.environ: os.environ['LESS'] = CLI.LESS_OPTS try: cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout) cmd.communicate(input=to_bytes(text)) except IOError: pass except KeyboardInterrupt: pass @staticmethod def _play_prereqs(): options = context.CLIARGS # all needs loader loader = DataLoader() basedir = options.get('basedir', False) if basedir: 
loader.set_basedir(basedir) add_all_plugin_dirs(basedir) AnsibleCollectionConfig.playbook_paths = basedir default_collection = _get_collection_name_from_path(basedir) if default_collection: display.warning(u'running with default collection {0}'.format(default_collection)) AnsibleCollectionConfig.default_collection = default_collection vault_ids = list(options['vault_ids']) default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST vault_ids = default_vault_ids + vault_ids vault_secrets = CLI.setup_vault_secrets(loader, vault_ids=vault_ids, vault_password_files=list(options['vault_password_files']), ask_vault_pass=options['ask_vault_pass'], auto_prompt=False) loader.set_vault_secrets(vault_secrets) # create the inventory, and filter it based on the subset specified (if any) inventory = InventoryManager(loader=loader, sources=options['inventory']) # create the variable manager, which will be shared throughout # the code, ensuring a consistent view of global variables variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False)) return loader, inventory, variable_manager @staticmethod def get_host_list(inventory, subset, pattern='all'): no_hosts = False if len(inventory.list_hosts()) == 0: # Empty inventory if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST: display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'") no_hosts = True inventory.subset(subset) hosts = inventory.list_hosts(pattern) if not hosts and no_hosts is False: raise AnsibleError("Specified hosts and/or --limit does not match any hosts") return hosts @staticmethod def get_password_from_file(pwd_file): b_pwd_file = to_bytes(pwd_file) secret = None if b_pwd_file == b'-': # ensure its read as bytes secret = sys.stdin.buffer.read() elif not os.path.exists(b_pwd_file): raise AnsibleError("The password file %s was not found" % pwd_file) elif is_executable(b_pwd_file): display.vvvv(u'The password file %s is a script.' % to_text(pwd_file)) cmd = [b_pwd_file] try: p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError as e: raise AnsibleError("Problem occured when trying to run the password script %s (%s)." " If this is not a script, remove the executable bit from the file." % (pwd_file, e)) stdout, stderr = p.communicate() if p.returncode != 0: raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr)) secret = stdout else: try: f = open(b_pwd_file, "rb") secret = f.read().strip() f.close() except (OSError, IOError) as e: raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e)) secret = secret.strip(b'\r\n') if not secret: raise AnsibleError('Empty password was provided from file (%s)' % pwd_file) return to_unsafe_text(secret) @classmethod def cli_executor(cls, args=None): if args is None: args = sys.argv try: display.debug("starting run") ansible_dir = Path("~/.ansible").expanduser() try: ansible_dir.mkdir(mode=0o700) except OSError as exc: if exc.errno != errno.EEXIST: display.warning( "Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace')) ) else: display.debug("Created the '%s' directory" % ansible_dir) try: args = [to_text(a, errors='surrogate_or_strict') for a in args] except UnicodeError: display.error('Command line args are not in utf-8, unable to continue. 
Ansible currently only understands utf-8') display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc())) exit_code = 6 else: cli = cls(args) exit_code = cli.run() except AnsibleOptionsError as e: cli.parser.print_help() display.error(to_text(e), wrap_text=False) exit_code = 5 except AnsibleParserError as e: display.error(to_text(e), wrap_text=False) exit_code = 4 # TQM takes care of these, but leaving comment to reserve the exit codes # except AnsibleHostUnreachable as e: # display.error(str(e)) # exit_code = 3 # except AnsibleHostFailed as e: # display.error(str(e)) # exit_code = 2 except AnsibleError as e: display.error(to_text(e), wrap_text=False) exit_code = 1 except KeyboardInterrupt: display.error("User interrupted execution") exit_code = 99 except Exception as e: if C.DEFAULT_DEBUG: # Show raw stacktraces in debug mode, It also allow pdb to # enter post mortem mode. raise have_cli_options = bool(context.CLIARGS) display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False) if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2: log_only = False if hasattr(e, 'orig_exc'): display.vvv('\nexception type: %s' % to_text(type(e.orig_exc))) why = to_text(e.orig_exc) if to_text(e) != why: display.vvv('\noriginal msg: %s' % why) else: display.display("to see the full traceback, use -vvv") log_only = True display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only) exit_code = 250 sys.exit(exit_code)
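The vault-id slug handling in the file above (`split_vault_id()` / `build_vault_ids()`) is easiest to see with a couple of concrete calls. Below is a minimal doctest-style sketch; expected return values are shown as comments, and the `default` label assumes `C.DEFAULT_VAULT_IDENTITY` keeps its upstream value.

```python
# Doctest-style sketch of the vault-id helpers shown above; expected return
# values are in the comments and are inferred from the source, not verified
# output.
from ansible.cli import CLI

# split_vault_id() splits an optional label off the '@'-separated source part.
CLI.split_vault_id('dev@prompt')    # -> ('dev', 'prompt')
CLI.split_vault_id('passfile.txt')  # -> (None, 'passfile.txt')

# build_vault_ids() rewrites --vault-password-file values into 'label@source'
# slugs, appended after any explicit --vault-id values, which is what gives
# --vault-id the higher precedence noted in the source comment.
CLI.build_vault_ids(['dev@prompt'],
                    vault_password_files=['passfile.txt'],
                    auto_prompt=False)
# -> ['dev@prompt', 'default@passfile.txt']
```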
closed
ansible/ansible
https://github.com/ansible/ansible
73,699
Inventory cache not flushed when using `--flush-cache`
##### SUMMARY
The inventory cache is not flushed when using `--flush-cache`

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
cli and/or inventory manager

##### ANSIBLE VERSION
```paste below
ansible 2.9.16
config file = /home/ioguix/git/support/ansible/ansible.cfg
configured module search path = ['/home/ioguix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
```

##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible/bug_report.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/tmp/prov/ansible/ansible.cfg) = ['/tmp/prov/ansible']
INVENTORY_CACHE_ENABLED(/tmp/prov/ansible/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/tmp/prov/ansible/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/tmp/prov/ansible/ansible.cfg) = .
```

##### OS / ENVIRONMENT
Debian 10

##### STEPS TO REPRODUCE
ansible.cfg:
```yaml
[defaults]
inventory_plugins = .
inventory = bug_report.yml

[inventory]
cache=True
cache_plugin=jsonfile
cache_connection=.
```

Dummy inventory plugin `bug_report.py` that reports whether the inventory manager passed `cache=True` or not:
```python
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable


class InventoryModule(BaseInventoryPlugin, Cacheable):

    def parse(self, inventory, loader, path, cache=True):
        if cache:
            self.display.v("Using data from cache")
        else:
            self.display.v("Updating cache")
```

Inventory yaml file `bug_report.yml`:
```yml
---
plugin: bug_report
```

##### EXPECTED RESULTS
The docs state at https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html#inventory-cache:

> This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as via --flush-cache or the meta task refresh_inventory).

From the previous minimal test case, as far as I understand the docs and the comments in other inventory plugins, the first call, using `--flush-cache`, should output `Updating cache`, and the second one should output `Using data from cache`:
```console
$ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml
Using /tmp/prov/ansible/ansible.cfg as config file
Updating cache
[...]
$ ansible-playbook -v --list-hosts playbook-dummy.yml
Using /tmp/prov/ansible/ansible.cfg as config file
Using data from cache
[...]
```

##### ACTUAL RESULTS
Whether or not `--flush-cache` is used, the inventory manager sets `cache=True` and the plugin always shows `Using data from cache`:
```console
$ ansible-playbook -v --list-hosts --flush-cache playbook-dummy.yml
Using /tmp/prov/ansible/ansible.cfg as config file
Using data from cache
[...]
$ ansible-playbook -v --list-hosts playbook-dummy.yml
Using /tmp/prov/ansible/ansible.cfg as config file
Using data from cache
[...]
```

Looking at the source code in the current devel branch, it seems `--flush-cache` only flushes facts and ignores the inventory: https://github.com/ansible/ansible/blob/devel/lib/ansible/cli/playbook.py#L208
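For context on where a fix could land, here is a hedged sketch of the CLI side. The `flush_cache` key is the one the existing `--flush-cache` option already stores in `CLIARGS`; the `cache` keyword on `InventoryManager` does not exist at this commit and is an assumption here (a matching manager-side sketch follows the manager source further below).

```python
# Hypothetical sketch: honor --flush-cache for inventory when the CLI builds
# its InventoryManager. The 'cache' keyword on InventoryManager is assumed;
# the actual fix landed in the linked PR and may differ in detail.
from ansible import context
from ansible.inventory.manager import InventoryManager
from ansible.parsing.dataloader import DataLoader


def build_inventory(loader: DataLoader) -> InventoryManager:
    options = context.CLIARGS
    return InventoryManager(
        loader=loader,
        sources=options['inventory'],
        # re-parse the sources when --flush-cache was given
        cache=not options.get('flush_cache'),
    )
```

Defaulting to `cache=True` would keep the current behavior for every caller that does not pass the flag.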
https://github.com/ansible/ansible/issues/73699
https://github.com/ansible/ansible/pull/77083
c9db73f04e7a5fae7bbbdff8efbd585d15971d31
94b73d66d53ca0df9911d1dd412bc1d3c9181b1b
2021-02-23T16:38:41Z
python
2022-03-17T18:15:03Z
lib/ansible/inventory/manager.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. ############################################# from __future__ import (absolute_import, division, print_function) __metaclass__ = type import fnmatch import os import sys import re import itertools import traceback from operator import attrgetter from random import shuffle from ansible import constants as C from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError from ansible.inventory.data import InventoryData from ansible.module_utils.six import string_types from ansible.module_utils._text import to_bytes, to_text from ansible.parsing.utils.addresses import parse_address from ansible.plugins.loader import inventory_loader from ansible.utils.helpers import deduplicate_list from ansible.utils.path import unfrackpath from ansible.utils.display import Display from ansible.utils.vars import combine_vars from ansible.vars.plugins import get_vars_from_inventory_sources display = Display() IGNORED_ALWAYS = [br"^\.", b"^host_vars$", b"^group_vars$", b"^vars_plugins$"] IGNORED_PATTERNS = [to_bytes(x) for x in C.INVENTORY_IGNORE_PATTERNS] IGNORED_EXTS = [b'%s$' % to_bytes(re.escape(x)) for x in C.INVENTORY_IGNORE_EXTS] IGNORED = re.compile(b'|'.join(IGNORED_ALWAYS + IGNORED_PATTERNS + IGNORED_EXTS)) PATTERN_WITH_SUBSCRIPT = re.compile( r'''^ (.+) # A pattern expression ending with... \[(?: # A [subscript] expression comprising: (-?[0-9]+)| # A single positive or negative number ([0-9]+)([:-]) # Or an x:y or x: range. ([0-9]*) )\] $ ''', re.X ) def order_patterns(patterns): ''' takes a list of patterns and reorders them by modifier to apply them consistently ''' # FIXME: this goes away if we apply patterns incrementally or by groups pattern_regular = [] pattern_intersection = [] pattern_exclude = [] for p in patterns: if not p: continue if p[0] == "!": pattern_exclude.append(p) elif p[0] == "&": pattern_intersection.append(p) else: pattern_regular.append(p) # if no regular pattern was given, hence only exclude and/or intersection # make that magically work if pattern_regular == []: pattern_regular = ['all'] # when applying the host selectors, run those without the "&" or "!" # first, then the &s, then the !s. return pattern_regular + pattern_intersection + pattern_exclude def split_host_pattern(pattern): """ Takes a string containing host patterns separated by commas (or a list thereof) and returns a list of single patterns (which may not contain commas). Whitespace is ignored. Also accepts ':' as a separator for backwards compatibility, but it is not recommended due to the conflict with IPv6 addresses and host ranges. 
Example: 'a,b[1], c[2:3] , d' -> ['a', 'b[1]', 'c[2:3]', 'd'] """ if isinstance(pattern, list): results = (split_host_pattern(p) for p in pattern) # flatten the results return list(itertools.chain.from_iterable(results)) elif not isinstance(pattern, string_types): pattern = to_text(pattern, errors='surrogate_or_strict') # If it's got commas in it, we'll treat it as a straightforward # comma-separated list of patterns. if u',' in pattern: patterns = pattern.split(u',') # If it doesn't, it could still be a single pattern. This accounts for # non-separator uses of colons: IPv6 addresses and [x:y] host ranges. else: try: (base, port) = parse_address(pattern, allow_ranges=True) patterns = [pattern] except Exception: # The only other case we accept is a ':'-separated list of patterns. # This mishandles IPv6 addresses, and is retained only for backwards # compatibility. patterns = re.findall( to_text(r'''(?: # We want to match something comprising: [^\s:\[\]] # (anything other than whitespace or ':[]' | # ...or... \[[^\]]*\] # a single complete bracketed expression) )+ # occurring once or more '''), pattern, re.X ) return [p.strip() for p in patterns if p.strip()] class InventoryManager(object): ''' Creates and manages inventory ''' def __init__(self, loader, sources=None, parse=True): # base objects self._loader = loader self._inventory = InventoryData() # a list of host(names) to contain current inquiries to self._restriction = None self._subset = None # caches self._hosts_patterns_cache = {} # resolved full patterns self._pattern_cache = {} # resolved individual patterns # the inventory dirs, files, script paths or lists of hosts if sources is None: self._sources = [] elif isinstance(sources, string_types): self._sources = [sources] else: self._sources = sources # get to work! 
if parse: self.parse_sources(cache=True) @property def localhost(self): return self._inventory.localhost @property def groups(self): return self._inventory.groups @property def hosts(self): return self._inventory.hosts def add_host(self, host, group=None, port=None): return self._inventory.add_host(host, group, port) def add_group(self, group): return self._inventory.add_group(group) def get_groups_dict(self): return self._inventory.get_groups_dict() def reconcile_inventory(self): self.clear_caches() return self._inventory.reconcile_inventory() def get_host(self, hostname): return self._inventory.get_host(hostname) def _fetch_inventory_plugins(self): ''' sets up loaded inventory plugins for usage ''' display.vvvv('setting up inventory plugins') plugins = [] for name in C.INVENTORY_ENABLED: plugin = inventory_loader.get(name) if plugin: plugins.append(plugin) else: display.warning('Failed to load inventory plugin, skipping %s' % name) if not plugins: raise AnsibleError("No inventory plugins available to generate inventory, make sure you have at least one enabled.") return plugins def parse_sources(self, cache=False): ''' iterate over inventory sources and parse each one to populate it''' parsed = False # allow for multiple inventory parsing for source in self._sources: if source: if ',' not in source: source = unfrackpath(source, follow=False) parse = self.parse_source(source, cache=cache) if parse and not parsed: parsed = True if parsed: # do post processing self._inventory.reconcile_inventory() else: if C.INVENTORY_UNPARSED_IS_FAILED: raise AnsibleError("No inventory was parsed, please check your configuration and options.") else: display.warning("No inventory was parsed, only implicit localhost is available") for group in self.groups.values(): group.vars = combine_vars(group.vars, get_vars_from_inventory_sources(self._loader, self._sources, [group], 'inventory')) for host in self.hosts.values(): host.vars = combine_vars(host.vars, get_vars_from_inventory_sources(self._loader, self._sources, [host], 'inventory')) def parse_source(self, source, cache=False): ''' Generate or update inventory for the source provided ''' parsed = False failures = [] display.debug(u'Examining possible inventory source: %s' % source) # use binary for path functions b_source = to_bytes(source) # process directories as a collection of inventories if os.path.isdir(b_source): display.debug(u'Searching for inventory files in directory: %s' % source) for i in sorted(os.listdir(b_source)): display.debug(u'Considering %s' % i) # Skip hidden files and stuff we explicitly ignore if IGNORED.search(i): continue # recursively deal with directory entries fullpath = to_text(os.path.join(b_source, i), errors='surrogate_or_strict') parsed_this_one = self.parse_source(fullpath, cache=cache) display.debug(u'parsed %s as %s' % (fullpath, parsed_this_one)) if not parsed: parsed = parsed_this_one else: # left with strings or files, let plugins figure it out # set so new hosts can use for inventory_file/dir vars self._inventory.current_source = source # try source with each plugin for plugin in self._fetch_inventory_plugins(): plugin_name = to_text(getattr(plugin, '_load_name', getattr(plugin, '_original_path', ''))) display.debug(u'Attempting to use plugin %s (%s)' % (plugin_name, plugin._original_path)) # initialize and figure out if plugin wants to attempt parsing this file try: plugin_wants = bool(plugin.verify_file(source)) except Exception: plugin_wants = False if plugin_wants: try: # FIXME in case plugin fails 1/2 way we have 
partial inventory plugin.parse(self._inventory, self._loader, source, cache=cache) try: plugin.update_cache_if_changed() except AttributeError: # some plugins might not implement caching pass parsed = True display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name)) break except AnsibleParserError as e: display.debug('%s was not parsable by %s' % (source, plugin_name)) tb = ''.join(traceback.format_tb(sys.exc_info()[2])) failures.append({'src': source, 'plugin': plugin_name, 'exc': e, 'tb': tb}) except Exception as e: display.debug('%s failed while attempting to parse %s' % (plugin_name, source)) tb = ''.join(traceback.format_tb(sys.exc_info()[2])) failures.append({'src': source, 'plugin': plugin_name, 'exc': AnsibleError(e), 'tb': tb}) else: display.vvv("%s declined parsing %s as it did not pass its verify_file() method" % (plugin_name, source)) if parsed: self._inventory.processed_sources.append(self._inventory.current_source) else: # only warn/error if NOT using the default or using it and the file is present # TODO: handle 'non file' inventory and detect vs hardcode default if source != '/etc/ansible/hosts' or os.path.exists(source): if failures: # only if no plugin processed files should we show errors. for fail in failures: display.warning(u'\n* Failed to parse %s with %s plugin: %s' % (to_text(fail['src']), fail['plugin'], to_text(fail['exc']))) if 'tb' in fail: display.vvv(to_text(fail['tb'])) # final error/warning on inventory source failure if C.INVENTORY_ANY_UNPARSED_IS_FAILED: raise AnsibleError(u'Completely failed to parse inventory source %s' % (source)) else: display.warning("Unable to parse %s as an inventory source" % source) # clear up, jic self._inventory.current_source = None return parsed def clear_caches(self): ''' clear all caches ''' self._hosts_patterns_cache = {} self._pattern_cache = {} # FIXME: flush inventory cache def refresh_inventory(self): ''' recalculate inventory ''' self.clear_caches() self._inventory = InventoryData() self.parse_sources(cache=False) def _match_list(self, items, pattern_str): # compile patterns try: if not pattern_str[0] == '~': pattern = re.compile(fnmatch.translate(pattern_str)) else: pattern = re.compile(pattern_str[1:]) except Exception: raise AnsibleError('Invalid host list pattern: %s' % pattern_str) # apply patterns results = [] for item in items: if pattern.match(item): results.append(item) return results def get_hosts(self, pattern="all", ignore_limits=False, ignore_restrictions=False, order=None): """ Takes a pattern or list of patterns and returns a list of matching inventory host names, taking into account any active restrictions or applied subsets """ hosts = [] # Check if pattern already computed if isinstance(pattern, list): pattern_list = pattern[:] else: pattern_list = [pattern] if pattern_list: if not ignore_limits and self._subset: pattern_list.extend(self._subset) if not ignore_restrictions and self._restriction: pattern_list.extend(self._restriction) # This is only used as a hash key in the self._hosts_patterns_cache dict # a tuple is faster than stringifying pattern_hash = tuple(pattern_list) if pattern_hash not in self._hosts_patterns_cache: patterns = split_host_pattern(pattern) hosts = self._evaluate_patterns(patterns) # mainly useful for hostvars[host] access if not ignore_limits and self._subset: # exclude hosts not in a subset, if defined subset_uuids = set(s._uuid for s in self._evaluate_patterns(self._subset)) hosts = [h for h in hosts if h._uuid in subset_uuids] if not 
ignore_restrictions and self._restriction: # exclude hosts mentioned in any restriction (ex: failed hosts) hosts = [h for h in hosts if h.name in self._restriction] self._hosts_patterns_cache[pattern_hash] = deduplicate_list(hosts) # sort hosts list if needed (should only happen when called from strategy) if order in ['sorted', 'reverse_sorted']: hosts = sorted(self._hosts_patterns_cache[pattern_hash][:], key=attrgetter('name'), reverse=(order == 'reverse_sorted')) elif order == 'reverse_inventory': hosts = self._hosts_patterns_cache[pattern_hash][::-1] else: hosts = self._hosts_patterns_cache[pattern_hash][:] if order == 'shuffle': shuffle(hosts) elif order not in [None, 'inventory']: raise AnsibleOptionsError("Invalid 'order' specified for inventory hosts: %s" % order) return hosts def _evaluate_patterns(self, patterns): """ Takes a list of patterns and returns a list of matching host names, taking into account any negative and intersection patterns. """ patterns = order_patterns(patterns) hosts = [] for p in patterns: # avoid resolving a pattern that is a plain host if p in self._inventory.hosts: hosts.append(self._inventory.get_host(p)) else: that = self._match_one_pattern(p) if p[0] == "!": that = set(that) hosts = [h for h in hosts if h not in that] elif p[0] == "&": that = set(that) hosts = [h for h in hosts if h in that] else: existing_hosts = set(y.name for y in hosts) hosts.extend([h for h in that if h.name not in existing_hosts]) return hosts def _match_one_pattern(self, pattern): """ Takes a single pattern and returns a list of matching host names. Ignores intersection (&) and exclusion (!) specifiers. The pattern may be: 1. A regex starting with ~, e.g. '~[abc]*' 2. A shell glob pattern with ?/*/[chars]/[!chars], e.g. 'foo*' 3. An ordinary word that matches itself only, e.g. 'foo' The pattern is matched using the following rules: 1. If it's 'all', it matches all hosts in all groups. 2. Otherwise, for each known group name: (a) if it matches the group name, the results include all hosts in the group or any of its children. (b) otherwise, if it matches any hosts in the group, the results include the matching hosts. This means that 'foo*' may match one or more groups (thus including all hosts therein) but also hosts in other groups. The built-in groups 'all' and 'ungrouped' are special. No pattern can match these group names (though 'all' behaves as though it matches, as described above). The word 'ungrouped' can match a host of that name, and patterns like 'ungr*' and 'al*' can match either hosts or groups other than all and ungrouped. If the pattern matches one or more group names according to these rules, it may have an optional range suffix to select a subset of the results. This is allowed only if the pattern is not a regex, i.e. '~foo[1]' does not work (the [1] is interpreted as part of the regex), but 'foo*[1]' would work if 'foo*' matched the name of one or more groups. Duplicate matches are always eliminated from the results. 
""" if pattern[0] in ("&", "!"): pattern = pattern[1:] if pattern not in self._pattern_cache: (expr, slice) = self._split_subscript(pattern) hosts = self._enumerate_matches(expr) try: hosts = self._apply_subscript(hosts, slice) except IndexError: raise AnsibleError("No hosts matched the subscripted pattern '%s'" % pattern) self._pattern_cache[pattern] = hosts return self._pattern_cache[pattern] def _split_subscript(self, pattern): """ Takes a pattern, checks if it has a subscript, and returns the pattern without the subscript and a (start,end) tuple representing the given subscript (or None if there is no subscript). Validates that the subscript is in the right syntax, but doesn't make sure the actual indices make sense in context. """ # Do not parse regexes for enumeration info if pattern[0] == '~': return (pattern, None) # We want a pattern followed by an integer or range subscript. # (We can't be more restrictive about the expression because the # fnmatch semantics permit [\[:\]] to occur.) subscript = None m = PATTERN_WITH_SUBSCRIPT.match(pattern) if m: (pattern, idx, start, sep, end) = m.groups() if idx: subscript = (int(idx), None) else: if not end: end = -1 subscript = (int(start), int(end)) if sep == '-': display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed") return (pattern, subscript) def _apply_subscript(self, hosts, subscript): """ Takes a list of hosts and a (start,end) tuple and returns the subset of hosts based on the subscript (which may be None to return all hosts). """ if not hosts or not subscript: return hosts (start, end) = subscript if end: if end == -1: end = len(hosts) - 1 return hosts[start:end + 1] else: return [hosts[start]] def _enumerate_matches(self, pattern): """ Returns a list of host names matching the given pattern according to the rules explained above in _match_one_pattern. """ results = [] # check if pattern matches group matching_groups = self._match_list(self._inventory.groups, pattern) if matching_groups: for groupname in matching_groups: results.extend(self._inventory.groups[groupname].get_hosts()) # check hosts if no groups matched or it is a regex/glob pattern if not matching_groups or pattern[0] == '~' or any(special in pattern for special in ('.', '?', '*', '[')): # pattern might match host matching_hosts = self._match_list(self._inventory.hosts, pattern) if matching_hosts: for hostname in matching_hosts: results.append(self._inventory.hosts[hostname]) if not results and pattern in C.LOCALHOST: # get_host autocreates implicit when needed implicit = self._inventory.get_host(pattern) if implicit: results.append(implicit) # Display warning if specified host pattern did not match any groups or hosts if not results and not matching_groups and pattern != 'all': msg = "Could not match supplied host pattern, ignoring: %s" % pattern display.debug(msg) if C.HOST_PATTERN_MISMATCH == 'warning': display.warning(msg) elif C.HOST_PATTERN_MISMATCH == 'error': raise AnsibleError(msg) # no need to write 'ignore' state return results def list_hosts(self, pattern="all"): """ return a list of hostnames for a pattern """ # FIXME: cache? result = self.get_hosts(pattern) # allow implicit localhost if pattern matches and no other results if len(result) == 0 and pattern in C.LOCALHOST: result = [pattern] return result def list_groups(self): # FIXME: cache? return sorted(self._inventory.groups.keys()) def restrict_to_hosts(self, restriction): """ Restrict list operations to the hosts given in restriction. 
This is used to batch serial operations in main playbook code, don't use this for other reasons. """ if restriction is None: return elif not isinstance(restriction, list): restriction = [restriction] self._restriction = set(to_text(h.name) for h in restriction) def subset(self, subset_pattern): """ Limits inventory results to a subset of inventory that matches a given pattern, such as to select a given geographic of numeric slice amongst a previous 'hosts' selection that only select roles, or vice versa. Corresponds to --limit parameter to ansible-playbook """ if subset_pattern is None: self._subset = None else: subset_patterns = split_host_pattern(subset_pattern) results = [] # allow Unix style @filename data for x in subset_patterns: if not x: continue if x[0] == "@": b_limit_file = to_bytes(x[1:]) if not os.path.exists(b_limit_file): raise AnsibleError(u'Unable to find limit file %s' % b_limit_file) if not os.path.isfile(b_limit_file): raise AnsibleError(u'Limit starting with "@" must be a file, not a directory: %s' % b_limit_file) with open(b_limit_file) as fd: results.extend([to_text(l.strip()) for l in fd.read().split("\n")]) else: results.append(to_text(x)) self._subset = results def remove_restriction(self): """ Do not restrict list operations """ self._restriction = None def clear_pattern_cache(self): self._pattern_cache = {}
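On the manager side, the `# FIXME: flush inventory cache` left in `clear_caches()` points at the missing piece: the constructor hard-codes `self.parse_sources(cache=True)`, so plugins always receive `cache=True` regardless of `--flush-cache`. A hedged sketch of threading a `cache` flag through the constructor follows; the merged fix in the linked PR may differ in detail.

```python
# Hypothetical sketch: let callers opt out of the inventory cache by adding a
# 'cache' parameter to the constructor and passing it through to the plugins'
# parse(..., cache=...) call instead of the hard-coded cache=True. Only the
# changed lines of the class shown above are spelled out here.
class InventoryManager(object):

    def __init__(self, loader, sources=None, parse=True, cache=True):
        self._loader = loader
        self._inventory = InventoryData()
        # ... remaining attribute setup exactly as in the original __init__ ...
        if parse:
            self.parse_sources(cache=cache)  # previously always cache=True
```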
closed
ansible/ansible
https://github.com/ansible/ansible
77,267
unarchive: io_buffer_size does not work
### Summary
The option `io_buffer_size` added in ansible-core 2.12 (#74094) does not appear in the argument spec and thus does not work. (Unlike the other two documented options that are absent from the argument spec, this one is not handled by the accompanying action plugin.)

### Issue Type
Bug Report

### Component Name
unarchive

### Ansible Version
```console
2.12+
```

### Configuration
```console
-
```

### OS / Environment
-

### Steps to Reproduce
-

### Expected Results
-

### Actual Results
```console
Argument spec validation will reject this option.
```

### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
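The report boils down to a one-line omission: the module documents `io_buffer_size` but never registers it, so argument-spec validation rejects it. A hedged sketch of the fix is below; the option name comes from the module docs, while the default value and the abbreviated neighboring options are assumptions, not confirmed upstream values.

```python
# Hypothetical sketch of registering the documented option in
# lib/ansible/modules/unarchive.py; neighboring options are abbreviated.
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        src=dict(type='path', required=True),
        dest=dict(type='path', required=True),
        remote_src=dict(type='bool', default=False),
        io_buffer_size=dict(type='int', default=64 * 1024),  # the missing entry
        # ... remaining unarchive options unchanged ...
    ),
    add_file_common_args=True,
)
```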
https://github.com/ansible/ansible/issues/77267
https://github.com/ansible/ansible/pull/77271
c555ce1bd90e95cca0d26259de27399d9bf18da4
e3c72230cda45798b4d9bd98c7f296d2895c4027
2022-03-11T21:08:12Z
python
2022-03-17T19:15:13Z
changelogs/fragments/77271-unarchive.yml