Dataset schema: each row pairs one GitHub issue with one file changed by its fixing pull request, so a single issue appears once per updated file. Column types and preview statistics:

| Column | Type | Values |
| --- | --- | --- |
| status | string (class) | 1 distinct value |
| repo_name | string (class) | 31 distinct values |
| repo_url | string (class) | 31 distinct values |
| issue_id | int64 | 1 to 104k |
| title | string | 4 to 369 chars |
| body | string | 0 to 254k chars |
| issue_url | string | 37 to 56 chars |
| pull_url | string | 37 to 54 chars |
| before_fix_sha | string | 40 chars (fixed) |
| after_fix_sha | string | 40 chars (fixed) |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string (class) | 5 distinct values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | 4 to 188 chars |
| file_content | string | 0 to 5.12M chars |
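Because the preview repeats one row per updated file, a single issue can appear several times below. A minimal loading sketch with the Hugging Face `datasets` library; the hub id used here is a placeholder, not this dataset's real name:

```python
from collections import defaultdict

from datasets import load_dataset

# Placeholder hub id; substitute the actual repository for this dataset.
ds = load_dataset("example-org/issue-fix-pairs", split="train")

row = ds[0]
print(row["repo_name"], row["issue_id"], row["updated_file"])

# Rows share issue metadata and differ only in `updated_file`, so regroup
# them into one record per issue when you want issue-level units:
files_by_issue = defaultdict(list)
for r in ds:
    files_by_issue[(r["repo_name"], r["issue_id"])].append(r["updated_file"])
```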
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64384
title: ansible_default_ipv4.broadcast contains global instead of the broadcast address
body:

##### SUMMARY
The `broadcast` address contains `global` in a Debian container running on Fedora. I could not reproduce this when the Ansible controller runs on Mac OS X.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
setup

##### ANSIBLE VERSION
```
ansible 2.8.4
  config file = /Users/yf30lg/.ansible.cfg
  configured module search path = [u'/Users/yf30lg/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /Library/Python/2.7/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```

##### CONFIGURATION
```
# no output
```

##### OS / ENVIRONMENT
Controller: Fedora 31
Targets:
- Debian stable (container debian:stable)
- Fedora 31 (container fedora:lastest)

##### STEPS TO REPRODUCE
Prepare the container:
```
docker run -ti debian:stable /bin/bash
apt-get update
apt-get install -y python
```
Get the facts:
```
ansible -m setup -i $(docker ps -ql), -c docker all
```

##### EXPECTED RESULTS
I was hoping to get the broadcast address back, instead of a word `global`.

##### ACTUAL RESULTS
```
...
"ansible_default_ipv4": {
    ...
    "broadcast": "global",
    ...
```

issue_url: https://github.com/ansible/ansible/issues/64384
pull_url: https://github.com/ansible/ansible/pull/64528
before_fix_sha: c4f442ed5a1f10ae06c56f78cb1c0ea6c0c7db20
after_fix_sha: e6bf20273808642ec58b4dd2a765cd7e5b25f48e
report_datetime: 2019-11-04T12:44:20Z
language: python
commit_datetime: 2020-07-30T17:40:14Z
updated_file: test/integration/targets/facts_linux_network/aliases
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64384
title: ansible_default_ipv4.broadcast contains global instead of the broadcast address
body: (identical to the body of the previous row for issue 64384)
issue_url: https://github.com/ansible/ansible/issues/64384
pull_url: https://github.com/ansible/ansible/pull/64528
before_fix_sha: c4f442ed5a1f10ae06c56f78cb1c0ea6c0c7db20
after_fix_sha: e6bf20273808642ec58b4dd2a765cd7e5b25f48e
report_datetime: 2019-11-04T12:44:20Z
language: python
commit_datetime: 2020-07-30T17:40:14Z
updated_file: test/integration/targets/facts_linux_network/meta/main.yml
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64384
title: ansible_default_ipv4.broadcast contains global instead of the broadcast address
body: (identical to the body of the previous row for issue 64384)
issue_url: https://github.com/ansible/ansible/issues/64384
pull_url: https://github.com/ansible/ansible/pull/64528
before_fix_sha: c4f442ed5a1f10ae06c56f78cb1c0ea6c0c7db20
after_fix_sha: e6bf20273808642ec58b4dd2a765cd7e5b25f48e
report_datetime: 2019-11-04T12:44:20Z
language: python
commit_datetime: 2020-07-30T17:40:14Z
updated_file: test/integration/targets/facts_linux_network/tasks/main.yml
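The three rows above belong to one fix for issue 64384. The symptom (a broadcast value of `global`) is characteristic of positionally parsing `ip addr` output, where container interfaces often omit the `brd` field; below is a minimal reconstruction of that kind of slip, under that assumption, and not Ansible's actual fact-gathering code:

```python
def parse_inet_line_buggy(line):
    # Positional parse: assumes "inet <addr>/<prefix> brd <broadcast> ...".
    words = line.split()
    address, prefix = words[1].split('/')
    broadcast = words[3] if len(words) > 3 else ''
    return address, broadcast

def parse_inet_line_fixed(line):
    # Keyword parse: only read the token *after* an explicit "brd" marker.
    words = line.split()
    address, prefix = words[1].split('/')
    broadcast = ''
    if 'brd' in words:
        broadcast = words[words.index('brd') + 1]
    return address, broadcast

# On a typical host `ip addr` prints a brd field, so both versions agree:
host_line = 'inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'
print(parse_inet_line_buggy(host_line))       # ('192.168.1.10', '192.168.1.255')

# In many containers the brd field is absent, and the positional parse
# picks up the scope value instead: the "broadcast": "global" symptom.
container_line = 'inet 172.17.0.2/16 scope global eth0'
print(parse_inet_line_buggy(container_line))  # ('172.17.0.2', 'global')
print(parse_inet_line_fixed(container_line))  # ('172.17.0.2', '')
```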
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 70267
title: Edit on Github not working in the new world order
body:

<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Edit on GitHub has major problems:
- Doesn't work on all the older releases, since it points to devel and most of the modules are no longer in devel
- doesn't work for modules still in ansible-base as per https://github.com/ansible/ansible/issues/69683
- May be difficult to get to work on devel once the docs pipeline is in place to bring back all the module docs from their respective collections.

In general, we may have to turn it off for modules entirely, but this is the issue to track how we should approach this problem.

<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->

##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->

issue_url: https://github.com/ansible/ansible/issues/70267
pull_url: https://github.com/ansible/ansible/pull/70951
before_fix_sha: ac5f3f8befaff088450c97f783cb145519b8f6bf
after_fix_sha: e28b20d72930ad8c0a359de05e05726090165fda
report_datetime: 2020-06-24T18:39:20Z
language: python
commit_datetime: 2020-07-30T19:43:07Z
updated_file: docs/docsite/_themes/sphinx_rtd_theme/breadcrumbs.html
file_content:
{# Support for Sphinx 1.3+ page_source_suffix, but don't break old builds. #}
{% if page_source_suffix %}
  {% set suffix = page_source_suffix %}
{% else %}
  {% set suffix = source_suffix %}
{% endif %}

{% if meta is defined and meta is not none %}
  {% set check_meta = True %}
{% else %}
  {% set check_meta = False %}
{% endif %}

{% if check_meta and 'github_url' in meta %}
  {% set display_github = True %}
{% endif %}

{% if check_meta and 'bitbucket_url' in meta %}
  {% set display_bitbucket = True %}
{% endif %}

{% if check_meta and 'gitlab_url' in meta %}
  {% set display_gitlab = True %}
{% endif %}

<div role="navigation" aria-label="breadcrumbs navigation">
  <ul class="wy-breadcrumbs">
    {% block breadcrumbs %}
      <li><a href="{{ pathto(master_doc) }}">{{ _('Docs') }}</a> &raquo;</li>
      {% for doc in parents %}
        <li><a href="{{ doc.link|e }}">{{ doc.title }}</a> &raquo;</li>
      {% endfor %}
      <li>{{ title }}</li>
    {% endblock %}
    {% block breadcrumbs_aside %}
      <li class="wy-breadcrumbs-aside">
        {% if hasdoc(pagename) %}
          {% if display_github %}
            {% if check_meta and 'github_url' in meta %}
              <!-- User defined GitHub URL -->
              <a href="{{ meta['github_url'] }}" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
            {% else %}
              <!-- Ansible-specific additions for modules etc -->
              {% if pagename.endswith('_module') %}
                <a href="https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ github_module_version }}{{ meta.get('source', '') }}?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
              {% elif pagename.startswith('plugins') and meta.get('source', None) %}
                <a href="https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ github_root_dir }}/{{ pagename }}.py?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
              {% elif pagename.startswith('cli') and meta.get('source', None) %}
                <a href="https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ github_cli_version }}{{ meta.get('source', '') }}?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
              {% elif (not 'list_of' in pagename) and (not 'category' in pagename) %}
                <a href="https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ github_version }}{{ conf_py_path }}{{ pagename }}{{ suffix }}?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
              {% endif %}
            {% endif %}
          {% elif display_bitbucket %}
            {% if check_meta and 'bitbucket_url' in meta %}
              <!-- User defined Bitbucket URL -->
              <a href="{{ meta['bitbucket_url'] }}" class="fa fa-bitbucket"> {{ _('Edit on Bitbucket') }}</a>
            {% else %}
              <a href="https://bitbucket.org/{{ bitbucket_user }}/{{ bitbucket_repo }}/src/{{ bitbucket_version}}{{ conf_py_path }}{{ pagename }}{{ suffix }}?mode={{ theme_vcs_pageview_mode|default("view") }}" class="fa fa-bitbucket"> {{ _('Edit on Bitbucket') }}</a>
            {% endif %}
          {% elif display_gitlab %}
            {% if check_meta and 'gitlab_url' in meta %}
              <!-- User defined GitLab URL -->
              <a href="{{ meta['gitlab_url'] }}" class="fa fa-gitlab"> {{ _('Edit on GitLab') }}</a>
            {% else %}
              <a href="https://{{ gitlab_host|default("gitlab.com") }}/{{ gitlab_user }}/{{ gitlab_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ gitlab_version }}{{ conf_py_path }}{{ pagename }}{{ suffix }}" class="fa fa-gitlab"> {{ _('Edit on GitLab') }}</a>
            {% endif %}
          {% elif show_source and source_url_prefix %}
            <a href="{{ source_url_prefix }}{{ pagename }}{{ suffix }}">{{ _('View page source') }}</a>
          {% elif show_source and has_source and sourcename %}
            <a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow"> {{ _('View page source') }}</a>
          {% endif %}
        {% endif %}
      </li>
    {% endblock %}
  </ul>

  {% if (theme_prev_next_buttons_location == 'top' or theme_prev_next_buttons_location == 'both') and (next or prev) %}
    <div class="rst-breadcrumbs-buttons" role="navigation" aria-label="breadcrumb navigation">
      {% if next %}
        <a href="{{ next.link|e }}" class="btn btn-neutral float-right" title="{{ next.title|striptags|e }}" accesskey="n">Next <span class="fa fa-arrow-circle-right"></span></a>
      {% endif %}
      {% if prev %}
        <a href="{{ prev.link|e }}" class="btn btn-neutral float-left" title="{{ prev.title|striptags|e }}" accesskey="p"><span class="fa fa-arrow-circle-left"></span> Previous</a>
      {% endif %}
    </div>
  {% endif %}

  <hr/>
</div>
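To see how the link construction above goes stale, here is a small sketch that renders just the module branch of the template with jinja2; all variable values are illustrative, not the docsite's real configuration. Once a module's source moves out of the `ansible/ansible` tree, or the page documents an older release, a URL pinned to a devel-style ref no longer resolves:

```python
from jinja2 import Template

# The module branch of breadcrumbs.html, reduced to its URL skeleton.
edit_url = Template(
    "https://{{ github_host }}/{{ github_user }}/{{ github_repo }}/"
    "{{ pageview_mode }}/{{ github_module_version }}{{ source }}"
)

print(edit_url.render(
    github_host="github.com",
    github_user="ansible",
    github_repo="ansible",
    pageview_mode="blob",
    github_module_version="devel",                # illustrative: pinned to devel
    source="/lib/ansible/modules/files/copy.py",  # illustrative module path
))
# -> https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/files/copy.py
# The link 404s as soon as that path no longer exists on the devel branch.
```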
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 70984
title: [template] unhelpful error message for `x in y` if y undefined
body:

<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
I'm using the `template` module to generate some templates. If I say `x in y` and `y` is undefined, the error message is not at all specific. It prints out the entire contents of a file, possibly the wrong one, and doesn't even name which variable is undefined.

Note that I expect this is not the right github repo to lodge this issue, since modules have been split into different repos. But I have no idea how to figure out which repo is the right repo. This itself is a documentation issue, #67939.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
template

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.0
  config file = /home/ec2-user/environment/repo/workload/ansible.cfg
  configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ec2-user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/ec2-user/.local/bin/ansible
  python version = 3.6.10 (default, Feb 10 2020, 19:55:14) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/home/ec2-user/environment/repo/workload/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/ec2-user/environment/repo/workload/ansible.cfg) = True
CACHE_PLUGIN(/home/ec2-user/environment/repo/workload/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/ec2-user/environment/repo/workload/ansible.cfg) = ~/.ansible-facts-cache
CACHE_PLUGIN_TIMEOUT(/home/ec2-user/environment/repo/workload/ansible.cfg) = 7200
DEFAULT_FORKS(/home/ec2-user/environment/repo/workload/ansible.cfg) = 50
DEFAULT_GATHERING(/home/ec2-user/environment/repo/workload/ansible.cfg) = no
INVENTORY_UNPARSED_IS_FAILED(/home/ec2-user/environment/repo/workload/ansible.cfg) = True
```

##### OS / ENVIRONMENT
Amazon Linux. Python Jinja module 2.10.1

##### STEPS TO REPRODUCE
`playbook.yaml`
```
---
- hosts: localhost
  connection: local
  vars:
    x: 1
  tasks:
    - template:
        src: "parent.j2"
        dest: "/tmp/out/parent-out.txt"
        trim_blocks: False
```
`parent.j2`
```
Here is the parent
x = {{ x }}
{% include 'child.j2' %}
Back to the parent
```
`child.j2`
```
Here is the child
x = {{ x }}
{% if z in y %} {# this line will fail, because y is not defined #}
{{ z }}
{% endif %}
```
`ansible-playbook playbook.yaml`

##### EXPECTED RESULTS
The same thing as when variables are undefined in other ways.
```
TASK [template] *************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'y' is undefined"}
```
Additionally the line number and file of the child template would be great. But at the very least I want the name of the undefined variable.

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
fatal: [localhost]: FAILED! => changed=false
  msg: |-
    AnsibleUndefinedVariable: Unable to look up a name or access an attribute in template string (Here is the parent
    x = {{ x }}
    {% include 'child.j2' %}
    Back to the parent).
    Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
```
Note that:
* the name of the undefined variable does not appear in the error message
* the error message contains the contents of the `parent.j2` file, which had no errors in it, and no undefined variables in it
* the error message does not contain the content or path or the file `child.j2` which had the error
* if the `parent.j2` file is long, this error message can be very long, so it's not even clear which task failed until you scroll up several pages

The only way to figure out what went wrong is to use a binary search, deleting half of `child.j2` at a time. Think of the case where you have a working template that uses a variable multiple times in the one template. Then you rename it elsewhere, and forget to rename it in the template. If you try to use the binary search method to find the error, you won't have any luck, because you'll end up in a situation where both halves have the same error.

Note that you get the error without nested templating, if you have `{{ x in y }}` in the parent, and no `include`. So this bug isn't caused by nesting, it's just exacerbated, because when the undefined variable is in a nested template, the error message contains the wrong file contents entirely.

issue_url: https://github.com/ansible/ansible/issues/70984
pull_url: https://github.com/ansible/ansible/pull/70990
before_fix_sha: d62dffafb3671dfc331eef6a847dde05b24a73d0
after_fix_sha: bf7276a4e88de6e102ad06aa1d0716ae799d87ea
report_datetime: 2020-07-30T02:12:28Z
language: python
commit_datetime: 2020-07-30T19:57:01Z
updated_file: changelogs/fragments/70984-templating-ansibleundefined-in-operator.yml
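The unhelpful TypeError is reproducible without Ansible at all. Below is a stripped-down stand-in class (not Ansible's actual `AnsibleUndefined`, which appears in the next row's file) that swallows attribute access the same way. It shows why the `in` operator is special: Python looks up `__contains__`/`__iter__` directly on the type, bypassing `__getattr__`, and then raises a generic TypeError that never names the undefined variable:

```python
class SilentUndefined:
    """Mimics an undefined-variable marker that returns itself from normal
    attribute access instead of raising right away."""
    def __getattr__(self, name):
        return self  # y.foo, y.foo.bar, ... all stay 'undefined'
    def __repr__(self):
        return 'SilentUndefined'

y = SilentUndefined()
print(y.foo.bar)  # SilentUndefined -- attribute access absorbs silently

# Membership tests bypass __getattr__: Python looks for __contains__ or
# __iter__ on the type itself, finds neither, and fails generically:
try:
    'z' in y
except TypeError as e:
    print(e)  # argument of type 'SilentUndefined' is not iterable
```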
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 70984
title: [template] unhelpful error message for `x in y` if y undefined
body: (identical to the body of the previous row for issue 70984)
issue_url: https://github.com/ansible/ansible/issues/70984
pull_url: https://github.com/ansible/ansible/pull/70990
before_fix_sha: d62dffafb3671dfc331eef6a847dde05b24a73d0
after_fix_sha: bf7276a4e88de6e102ad06aa1d0716ae799d87ea
report_datetime: 2020-07-30T02:12:28Z
language: python
commit_datetime: 2020-07-30T19:57:01Z
updated_file: lib/ansible/template/__init__.py
file_content:
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ast import datetime import os import pkgutil import pwd import re import time from contextlib import contextmanager from distutils.version import LooseVersion from numbers import Number from traceback import format_exc try: from hashlib import sha1 except ImportError: from sha import sha as sha1 from jinja2.exceptions import TemplateSyntaxError, UndefinedError from jinja2.loaders import FileSystemLoader from jinja2.runtime import Context, StrictUndefined from ansible import constants as C from ansible.errors import AnsibleError, AnsibleFilterError, AnsiblePluginRemovedError, AnsibleUndefinedVariable, AnsibleAssertionError from ansible.module_utils.six import iteritems, string_types, text_type from ansible.module_utils.six.moves import range from ansible.module_utils._text import to_native, to_text, to_bytes from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.compat.importlib import import_module from ansible.plugins.loader import filter_loader, lookup_loader, test_loader from ansible.template.safe_eval import safe_eval from ansible.template.template import AnsibleJ2Template from ansible.template.vars import AnsibleJ2Vars from ansible.utils.collection_loader import AnsibleCollectionRef from ansible.utils.display import Display from ansible.utils.collection_loader._collection_finder import _get_collection_metadata from ansible.utils.unsafe_proxy import wrap_var display = Display() __all__ = ['Templar', 'generate_ansible_template_vars'] # A regex for checking to see if a variable we're trying to # expand is just a single variable name. # Primitive Types which we don't want Jinja to convert to strings. NON_TEMPLATED_TYPES = (bool, Number) JINJA2_OVERRIDE = '#jinja2:' from jinja2 import __version__ as j2_version USE_JINJA2_NATIVE = False if C.DEFAULT_JINJA2_NATIVE: try: from jinja2.nativetypes import NativeEnvironment as Environment from ansible.template.native_helpers import ansible_native_concat as j2_concat USE_JINJA2_NATIVE = True except ImportError: from jinja2 import Environment from jinja2.utils import concat as j2_concat display.warning( 'jinja2_native requires Jinja 2.10 and above. ' 'Version detected: %s. Falling back to default.' 
% j2_version ) else: from jinja2 import Environment from jinja2.utils import concat as j2_concat JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin')) JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end')) RANGE_TYPE = type(range(0)) def generate_ansible_template_vars(path, dest_path=None): b_path = to_bytes(path) try: template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name except (KeyError, TypeError): template_uid = os.stat(b_path).st_uid temp_vars = { 'template_host': to_text(os.uname()[1]), 'template_path': path, 'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)), 'template_uid': to_text(template_uid), 'template_fullpath': os.path.abspath(path), 'template_run_date': datetime.datetime.now(), 'template_destpath': to_native(dest_path) if dest_path else None, } managed_default = C.DEFAULT_MANAGED_STR managed_str = managed_default.format( host=temp_vars['template_host'], uid=temp_vars['template_uid'], file=temp_vars['template_path'], ) temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path)))) return temp_vars def _escape_backslashes(data, jinja_env): """Double backslashes within jinja2 expressions A user may enter something like this in a playbook:: debug: msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}" The string inside of the {{ gets interpreted multiple times First by yaml. Then by python. And finally by jinja2 as part of it's variable. Because it is processed by both python and jinja2, the backslash escaped characters get unescaped twice. This means that we'd normally have to use four backslashes to escape that. This is painful for playbook authors as they have to remember different rules for inside vs outside of a jinja2 expression (The backslashes outside of the "{{ }}" only get processed by yaml and python. So they only need to be escaped once). The following code fixes this by automatically performing the extra quoting of backslashes inside of a jinja2 expression. """ if '\\' in data and '{{' in data: new_data = [] d2 = jinja_env.preprocess(data) in_var = False for token in jinja_env.lex(d2): if token[1] == 'variable_begin': in_var = True new_data.append(token[2]) elif token[1] == 'variable_end': in_var = False new_data.append(token[2]) elif in_var and token[1] == 'string': # Double backslashes only if we're inside of a jinja2 variable new_data.append(token[2].replace('\\', '\\\\')) else: new_data.append(token[2]) data = ''.join(new_data) return data def is_template(data, jinja_env): """This function attempts to quickly detect whether a value is a jinja2 template. To do so, we look for the first 2 matching jinja2 tokens for start and end delimiters. """ found = None start = True comment = False d2 = jinja_env.preprocess(data) # This wraps a lot of code, but this is due to lex returing a generator # so we may get an exception at any part of the loop try: for token in jinja_env.lex(d2): if token[1] in JINJA2_BEGIN_TOKENS: if start and token[1] == 'comment_begin': # Comments can wrap other token types comment = True start = False # Example: variable_end -> variable found = token[1].split('_')[0] elif token[1] in JINJA2_END_TOKENS: if token[1].split('_')[0] == found: return True elif comment: continue return False except TemplateSyntaxError: return False return False def _count_newlines_from_end(in_str): ''' Counts the number of newlines at the end of a string. 
This is used during the jinja2 templating to ensure the count matches the input, since some newlines may be thrown away during the templating. ''' try: i = len(in_str) j = i - 1 while in_str[j] == '\n': j -= 1 return i - 1 - j except IndexError: # Uncommon cases: zero length string and string containing only newlines return i def recursive_check_defined(item): from jinja2.runtime import Undefined if isinstance(item, MutableMapping): for key in item: recursive_check_defined(item[key]) elif isinstance(item, list): for i in item: recursive_check_defined(i) else: if isinstance(item, Undefined): raise AnsibleFilterError("{0} is undefined".format(item)) def _is_rolled(value): """Helper method to determine if something is an unrolled generator, iterator, or similar object """ return ( isinstance(value, Iterator) or isinstance(value, MappingView) or isinstance(value, RANGE_TYPE) ) def _unroll_iterator(func): """Wrapper function, that intercepts the result of a filter and auto unrolls a generator, so that users are not required to explicitly use ``|list`` to unroll. """ def wrapper(*args, **kwargs): ret = func(*args, **kwargs) if _is_rolled(ret): return list(ret) return ret # This code is duplicated from ``functools.update_wrapper`` from Py3.7. # ``functools.update_wrapper`` was failing when the func was ``functools.partial`` for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'): try: value = getattr(func, attr) except AttributeError: pass else: setattr(wrapper, attr, value) for attr in ('__dict__',): getattr(wrapper, attr).update(getattr(func, attr, {})) wrapper.__wrapped__ = func return wrapper class AnsibleUndefined(StrictUndefined): ''' A custom Undefined class, which returns further Undefined objects on access, rather than throwing an exception. ''' def __getattr__(self, name): if name == '__UNSAFE__': # AnsibleUndefined should never be assumed to be unsafe # This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True`` raise AttributeError(name) # Return original Undefined object to preserve the first failure context return self def __getitem__(self, key): # Return original Undefined object to preserve the first failure context return self def __repr__(self): return 'AnsibleUndefined' class AnsibleContext(Context): ''' A custom context, which intercepts resolve() calls and sets a flag internally if any variable lookup returns an AnsibleUnsafe value. This flag is checked post-templating, and (when set) will result in the final templated result being wrapped in AnsibleUnsafe. ''' def __init__(self, *args, **kwargs): super(AnsibleContext, self).__init__(*args, **kwargs) self.unsafe = False def _is_unsafe(self, val): ''' Our helper function, which will also recursively check dict and list entries due to the fact that they may be repr'd and contain a key or value which contains jinja2 syntax and would otherwise lose the AnsibleUnsafe value. ''' if isinstance(val, dict): for key in val.keys(): if self._is_unsafe(val[key]): return True elif isinstance(val, list): for item in val: if self._is_unsafe(item): return True elif getattr(val, '__UNSAFE__', False) is True: return True return False def _update_unsafe(self, val): if val is not None and not self.unsafe and self._is_unsafe(val): self.unsafe = True def resolve(self, key): ''' The intercepted resolve(), which uses the helper above to set the internal flag whenever an unsafe variable value is returned. 
''' val = super(AnsibleContext, self).resolve(key) self._update_unsafe(val) return val def resolve_or_missing(self, key): val = super(AnsibleContext, self).resolve_or_missing(key) self._update_unsafe(val) return val def get_all(self): """Return the complete context as a dict including the exported variables. For optimizations reasons this might not return an actual copy so be careful with using it. This is to prevent from running ``AnsibleJ2Vars`` through dict(): ``dict(self.parent, **self.vars)`` In Ansible this means that ALL variables would be templated in the process of re-creating the parent because ``AnsibleJ2Vars`` templates each variable in its ``__getitem__`` method. Instead we re-create the parent via ``AnsibleJ2Vars.add_locals`` that creates a new ``AnsibleJ2Vars`` copy without templating each variable. This will prevent unnecessarily templating unused variables in cases like setting a local variable and passing it to {% include %} in a template. Also see ``AnsibleJ2Template``and https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729 """ if LooseVersion(j2_version) >= LooseVersion('2.9'): if not self.vars: return self.parent if not self.parent: return self.vars if isinstance(self.parent, AnsibleJ2Vars): return self.parent.add_locals(self.vars) else: # can this happen in Ansible? return dict(self.parent, **self.vars) class JinjaPluginIntercept(MutableMapping): def __init__(self, delegatee, pluginloader, *args, **kwargs): super(JinjaPluginIntercept, self).__init__(*args, **kwargs) self._delegatee = delegatee self._pluginloader = pluginloader if self._pluginloader.class_name == 'FilterModule': self._method_map_name = 'filters' self._dirname = 'filter' elif self._pluginloader.class_name == 'TestModule': self._method_map_name = 'tests' self._dirname = 'test' self._collection_jinja_func_cache = {} # FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's # aren't supposed to change during a run def __getitem__(self, key): try: if not isinstance(key, string_types): raise ValueError('key must be a string') key = to_native(key) if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect func = self._delegatee.get(key) if func: return func # didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path leaf_key = key key = 'ansible.builtin.' 
+ key else: leaf_key = key.split('.')[-1] acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname) if not acr: raise KeyError('invalid plugin name: {0}'.format(key)) ts = _get_collection_metadata(acr.collection) # TODO: implement support for collection-backed redirect (currently only builtin) # TODO: implement cycle detection (unified across collection redir as well) routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {}) deprecation_entry = routing_entry.get('deprecation') if deprecation_entry: warning_text = deprecation_entry.get('warning_text') removal_date = deprecation_entry.get('removal_date') removal_version = deprecation_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key) display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection) tombstone_entry = routing_entry.get('tombstone') if tombstone_entry: warning_text = tombstone_entry.get('warning_text') removal_date = tombstone_entry.get('removal_date') removal_version = tombstone_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" has been removed'.format(self._dirname, key) exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection, removed=True) raise AnsiblePluginRemovedError(exc_msg) redirect_fqcr = routing_entry.get('redirect', None) if redirect_fqcr: acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname) display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource)) key = redirect_fqcr # TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs) func = self._collection_jinja_func_cache.get(key) if func: return func try: pkg = import_module(acr.n_python_package_name) except ImportError: raise KeyError() parent_prefix = acr.collection if acr.subdirs: parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs) # TODO: implement collection-level redirect for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'): if ispkg: continue try: plugin_impl = self._pluginloader.get(module_name) except Exception as e: raise TemplateSyntaxError(to_native(e), 0) method_map = getattr(plugin_impl, self._method_map_name) for f in iteritems(method_map()): fq_name = '.'.join((parent_prefix, f[0])) # FIXME: detect/warn on intra-collection function name collisions self._collection_jinja_func_cache[fq_name] = _unroll_iterator(f[1]) function_impl = self._collection_jinja_func_cache[key] return function_impl except AnsiblePluginRemovedError as apre: raise TemplateSyntaxError(to_native(apre), 0) except KeyError: raise except Exception as ex: display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex))) display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc())) raise TemplateSyntaxError(to_native(ex), 0) def __setitem__(self, key, value): return self._delegatee.__setitem__(key, value) def __delitem__(self, key): raise NotImplementedError() def __iter__(self): # not strictly accurate since we're not counting dynamically-loaded values return iter(self._delegatee) def __len__(self): # not strictly accurate since we're not counting dynamically-loaded values return len(self._delegatee) class AnsibleEnvironment(Environment): ''' Our custom environment, which simply allows us to override the class-level 
values for the Template and Context classes used by jinja2 internally. ''' context_class = AnsibleContext template_class = AnsibleJ2Template def __init__(self, *args, **kwargs): super(AnsibleEnvironment, self).__init__(*args, **kwargs) self.filters = JinjaPluginIntercept(self.filters, filter_loader) self.tests = JinjaPluginIntercept(self.tests, test_loader) class Templar: ''' The main class for templating, with the main entry-point of template(). ''' def __init__(self, loader, shared_loader_obj=None, variables=None): variables = {} if variables is None else variables self._loader = loader self._filters = None self._tests = None self._available_variables = variables self._cached_result = {} if loader: self._basedir = loader.get_basedir() else: self._basedir = './' if shared_loader_obj: self._filter_loader = getattr(shared_loader_obj, 'filter_loader') self._test_loader = getattr(shared_loader_obj, 'test_loader') self._lookup_loader = getattr(shared_loader_obj, 'lookup_loader') else: self._filter_loader = filter_loader self._test_loader = test_loader self._lookup_loader = lookup_loader # flags to determine whether certain failures during templating # should result in fatal errors being raised self._fail_on_lookup_errors = True self._fail_on_filter_errors = True self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR self.environment = AnsibleEnvironment( trim_blocks=True, undefined=AnsibleUndefined, extensions=self._get_extensions(), finalize=self._finalize, loader=FileSystemLoader(self._basedir), ) # jinja2 global is inconsistent across versions, this normalizes them self.environment.globals['dict'] = dict # Custom globals self.environment.globals['lookup'] = self._lookup self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup self.environment.globals['now'] = self._now_datetime self.environment.globals['finalize'] = self._finalize # the current rendering context under which the templar class is working self.cur_context = None self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string)) self._clean_regex = re.compile(r'(?:%s|%s|%s|%s)' % ( self.environment.variable_start_string, self.environment.block_start_string, self.environment.block_end_string, self.environment.variable_end_string )) self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' % ('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string)) def _get_filters(self): ''' Returns filter plugins, after loading and caching them if need be ''' if self._filters is not None: return self._filters.copy() self._filters = dict() for fp in self._filter_loader.all(): self._filters.update(fp.filters()) return self._filters.copy() def _get_tests(self): ''' Returns tests plugins, after loading and caching them if need be ''' if self._tests is not None: return self._tests.copy() self._tests = dict() for fp in self._test_loader.all(): self._tests.update(fp.tests()) return self._tests.copy() def _get_extensions(self): ''' Return jinja2 extensions to load. If some extensions are set via jinja_extensions in ansible.cfg, we try to load them with the jinja environment. 
''' jinja_exts = [] if C.DEFAULT_JINJA2_EXTENSIONS: # make sure the configuration directive doesn't contain spaces # and split extensions in an array jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',') return jinja_exts @property def available_variables(self): return self._available_variables @available_variables.setter def available_variables(self, variables): ''' Sets the list of template variables this Templar instance will use to template things, so we don't have to pass them around between internal methods. We also clear the template cache here, as the variables are being changed. ''' if not isinstance(variables, Mapping): raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables))) self._available_variables = variables self._cached_result = {} def set_available_variables(self, variables): display.deprecated( 'set_available_variables is being deprecated. Use "@available_variables.setter" instead.', version='2.13', collection_name='ansible.builtin' ) self.available_variables = variables @contextmanager def set_temporary_context(self, **kwargs): """Context manager used to set temporary templating context, without having to worry about resetting original values afterward Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to set context on another object, it must be in ``mapping``. """ mapping = { 'available_variables': self, 'searchpath': self.environment.loader, } original = {} for key, value in kwargs.items(): obj = mapping.get(key, self.environment) try: original[key] = getattr(obj, key) if value is not None: setattr(obj, key, value) except AttributeError: # Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7 pass yield for key in original: obj = mapping.get(key, self.environment) setattr(obj, key, original[key]) def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, convert_data=True, static_vars=None, cache=True, disable_lookups=False): ''' Templates (possibly recursively) any given data as input. If convert_bare is set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}') before being sent through the template engine. ''' static_vars = [''] if static_vars is None else static_vars # Don't template unsafe variables, just return them. if hasattr(variable, '__UNSAFE__'): return variable if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors try: if convert_bare: variable = self._convert_bare_variable(variable) if isinstance(variable, string_types): result = variable if self.is_possibly_template(variable): # Check to see if the string we are trying to render is just referencing a single # var. In this case we don't want to accidentally change the type of the variable # to a string by using the jinja template renderer. We just want to pass it. 
only_one = self.SINGLE_VAR.match(variable) if only_one: var_name = only_one.group(1) if var_name in self._available_variables: resolved_val = self._available_variables[var_name] if isinstance(resolved_val, NON_TEMPLATED_TYPES): return resolved_val elif resolved_val is None: return C.DEFAULT_NULL_REPRESENTATION # Using a cache in order to prevent template calls with already templated variables sha1_hash = None if cache: variable_hash = sha1(text_type(variable).encode('utf-8')) options_hash = sha1( ( text_type(preserve_trailing_newlines) + text_type(escape_backslashes) + text_type(fail_on_undefined) + text_type(overrides) ).encode('utf-8') ) sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest() if cache and sha1_hash in self._cached_result: result = self._cached_result[sha1_hash] else: result = self.do_template( variable, preserve_trailing_newlines=preserve_trailing_newlines, escape_backslashes=escape_backslashes, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) if not USE_JINJA2_NATIVE: unsafe = hasattr(result, '__UNSAFE__') if convert_data and not self._no_type_regex.match(variable): # if this looks like a dictionary or list, convert it to such using the safe_eval method if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \ result.startswith("[") or result in ("True", "False"): eval_results = safe_eval(result, include_exceptions=True) if eval_results[1] is None: result = eval_results[0] if unsafe: result = wrap_var(result) else: # FIXME: if the safe_eval raised an error, should we do something with it? pass # we only cache in the case where we have a single variable # name, to make sure we're not putting things which may otherwise # be dynamic in the cache (filters, lookups, etc.) if cache and only_one: self._cached_result[sha1_hash] = result return result elif is_sequence(variable): return [self.template( v, preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) for v in variable] elif isinstance(variable, Mapping): d = {} # we don't use iteritems() here to avoid problems if the underlying dict # changes sizes due to the templating, which can happen with hostvars for k in variable.keys(): if k not in static_vars: d[k] = self.template( variable[k], preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) else: d[k] = variable[k] return d else: return variable except AnsibleFilterError: if self._fail_on_filter_errors: raise else: return variable def is_template(self, data): '''lets us know if data has a template''' if isinstance(data, string_types): return is_template(data, self.environment) elif isinstance(data, (list, tuple)): for v in data: if self.is_template(v): return True elif isinstance(data, dict): for k in data: if self.is_template(k) or self.is_template(data[k]): return True return False templatable = is_template def is_possibly_template(self, data): '''Determines if a string looks like a template, by seeing if it contains a jinja2 start delimiter. Does not guarantee that the string is actually a template. This is different than ``is_template`` which is more strict. This method may return ``True`` on a string that is not templatable. 
Useful when guarding passing a string for templating, but when you want to allow the templating engine to make the final assessment which may result in ``TemplateSyntaxError``. ''' env = self.environment if isinstance(data, string_types): for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string): if marker in data: return True return False def _convert_bare_variable(self, variable): ''' Wraps a bare string, which may have an attribute portion (ie. foo.bar) in jinja2 variable braces so that it is evaluated properly. ''' if isinstance(variable, string_types): contains_filters = "|" in variable first_part = variable.split("|")[0].split(".")[0].split("[")[0] if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable: return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string) # the variable didn't meet the conditions to be converted, # so just return it as-is return variable def _finalize(self, thing): ''' A custom finalize method for jinja2, which prevents None from being returned. This avoids a string of ``"None"`` as ``None`` has no importance in YAML. If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always ''' if _is_rolled(thing): # Auto unroll a generator, so that users are not required to # explicitly use ``|list`` to unroll # This only affects the scenario where the final result of templating # is a generator, and not where a filter creates a generator in the middle # of a template. See ``_unroll_iterator`` for the other case. This is probably # unncessary return list(thing) if USE_JINJA2_NATIVE: return thing return thing if thing is not None else '' def _fail_lookup(self, name, *args, **kwargs): raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name) def _now_datetime(self, utc=False, fmt=None): '''jinja2 global function to return current datetime, potentially formatted via strftime''' if utc: now = datetime.datetime.utcnow() else: now = datetime.datetime.now() if fmt: return now.strftime(fmt) return now def _query_lookup(self, name, *args, **kwargs): ''' wrapper for lookup, force wantlist true''' kwargs['wantlist'] = True return self._lookup(name, *args, **kwargs) def _lookup(self, name, *args, **kwargs): instance = self._lookup_loader.get(name, loader=self._loader, templar=self) if instance is not None: wantlist = kwargs.pop('wantlist', False) allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS) errors = kwargs.pop('errors', 'strict') from ansible.utils.listify import listify_lookup_plugin_terms loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False) # safely catch run failures per #5059 try: ran = instance.run(loop_terms, variables=self._available_variables, **kwargs) except (AnsibleUndefinedVariable, UndefinedError) as e: raise AnsibleUndefinedVariable(e) except Exception as e: if self._fail_on_lookup_errors: msg = u"An unhandled exception occurred while running the lookup plugin '%s'. 
Error was a %s, original message: %s" % \ (name, type(e), to_text(e)) if errors == 'warn': display.warning(msg) elif errors == 'ignore': display.display(msg, log_only=True) else: raise AnsibleError(to_native(msg)) ran = [] if wantlist else None if ran and not allow_unsafe: if wantlist: ran = wrap_var(ran) else: try: ran = wrap_var(",".join(ran)) except TypeError: # Lookup Plugins should always return lists. Throw an error if that's not # the case: if not isinstance(ran, Sequence): raise AnsibleError("The lookup plugin '%s' did not return a list." % name) # The TypeError we can recover from is when the value *inside* of the list # is not a string if len(ran) == 1: ran = wrap_var(ran[0]) else: ran = wrap_var(ran) if self.cur_context: self.cur_context.unsafe = True return ran else: raise AnsibleError("lookup plugin (%s) not found" % name) def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False): if USE_JINJA2_NATIVE and not isinstance(data, string_types): return data # For preserving the number of input newlines in the output (used # later in this method) data_newlines = _count_newlines_from_end(data) if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors try: # allows template header overrides to change jinja2 options. if overrides is None: myenv = self.environment.overlay() else: myenv = self.environment.overlay(overrides) # Get jinja env overrides from template if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE): eol = data.find('\n') line = data[len(JINJA2_OVERRIDE):eol] data = data[eol + 1:] for pair in line.split(','): (key, val) = pair.split(':') key = key.strip() setattr(myenv, key, ast.literal_eval(val.strip())) # Adds Ansible custom filters and tests myenv.filters.update(self._get_filters()) for k in myenv.filters: myenv.filters[k] = _unroll_iterator(myenv.filters[k]) myenv.tests.update(self._get_tests()) if escape_backslashes: # Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\". data = _escape_backslashes(data, myenv) try: t = myenv.from_string(data) except TemplateSyntaxError as e: raise AnsibleError("template error while templating string: %s. 
String: %s" % (to_native(e), to_native(data))) except Exception as e: if 'recursion' in to_native(e): raise AnsibleError("recursive loop detected in template string: %s" % to_native(data)) else: return data if disable_lookups: t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup jvars = AnsibleJ2Vars(self, t.globals) self.cur_context = new_context = t.new_context(jvars, shared=True) rf = t.root_render_func(new_context) try: res = j2_concat(rf) if getattr(new_context, 'unsafe', False): res = wrap_var(res) except TypeError as te: if 'AnsibleUndefined' in to_native(te): errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data) errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te) raise AnsibleUndefinedVariable(errmsg) else: display.debug("failing because of a type error, template data is: %s" % to_text(data)) raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te))) if USE_JINJA2_NATIVE and not isinstance(res, string_types): return res if preserve_trailing_newlines: # The low level calls above do not preserve the newline # characters at the end of the input data, so we use the # calculate the difference in newlines and append them # to the resulting output for parity # # jinja2 added a keep_trailing_newline option in 2.7 when # creating an Environment. That would let us make this code # better (remove a single newline if # preserve_trailing_newlines is False). Once we can depend on # that version being present, modify our code to set that when # initializing self.environment and remove a single trailing # newline here if preserve_newlines is False. res_newlines = _count_newlines_from_end(res) if data_newlines > res_newlines: res += self.environment.newline_sequence * (data_newlines - res_newlines) return res except (UndefinedError, AnsibleUndefinedVariable) as e: if fail_on_undefined: raise AnsibleUndefinedVariable(e) else: display.debug("Ignoring undefined failure: %s" % to_text(e)) return data # for backwards compatibility in case anyone is using old private method directly _do_template = do_template
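For orientation in the flattened module above: the public entry point is `Templar.template()`, and the constructor signature (`loader, shared_loader_obj=None, variables=None`) is visible in the file. A minimal usage sketch against that era's API, assuming a full Ansible installation:

```python
from ansible.parsing.dataloader import DataLoader
from ansible.template import Templar

# Templar needs a loader (for locating template files and the basedir)
# and the pool of variables it may substitute.
templar = Templar(loader=DataLoader(), variables={'x': 1})

templar.template('{{ x }}')        # -> 1: the single-variable shortcut
                                   #    returns the int, not the string '1'
templar.template('x is {{ x }}')   # -> 'x is 1'
templar.template('{{ y }}')        # raises AnsibleUndefinedVariable, since
                                   # fail_on_undefined defaults to True
```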
closed
ansible/ansible
https://github.com/ansible/ansible
70,984
[template] unhelpful error message for `x in y` if y undefined
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY I'm using the `template` module to generate some templates. If I say `x in y` and `y` is undefined, the error message is not at all specific. It prints out the entire contents of a file, possibly the wrong one, and doesn't even name which variable is undefined. Note that I expect this is not the right github repo to lodge this issue, since modules have been split into different repos. But I have no idea how to figure out which repo is the right repo. This itself is a documentation issue, #67939 . ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME template ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.9.0 config file = /home/ec2-user/environment/repo/workload/ansible.cfg configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/ec2-user/.local/lib/python3.6/site-packages/ansible executable location = /home/ec2-user/.local/bin/ansible python version = 3.6.10 (default, Feb 10 2020, 19:55:14) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` ANSIBLE_PIPELINING(/home/ec2-user/environment/repo/workload/ansible.cfg) = True ANY_ERRORS_FATAL(/home/ec2-user/environment/repo/workload/ansible.cfg) = True CACHE_PLUGIN(/home/ec2-user/environment/repo/workload/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/ec2-user/environment/repo/workload/ansible.cfg) = ~/.ansible-facts-cache CACHE_PLUGIN_TIMEOUT(/home/ec2-user/environment/repo/workload/ansible.cfg) = 7200 DEFAULT_FORKS(/home/ec2-user/environment/repo/workload/ansible.cfg) = 50 DEFAULT_GATHERING(/home/ec2-user/environment/repo/workload/ansible.cfg) = no INVENTORY_UNPARSED_IS_FAILED(/home/ec2-user/environment/repo/workload/ansible.cfg) = True ``` ##### OS / ENVIRONMENT Amazon Linux. Python Jinja module 2.10.1 ##### STEPS TO REPRODUCE `playbook.yaml` ``` --- - hosts: localhost connection: local vars: x: 1 tasks: - template: src: "parent.j2" dest: "/tmp/out/parent-out.txt" trim_blocks: False ``` `parent.j2` ``` Here is the parent x = {{ x }} {% include 'child.j2' %} Back to the parent ``` `child.j2` ``` Here is the child x = {{ x }} {% if z in y %} {# this line will fail, because y is not defined #} {{ z }} {% endif %} ``` `ansible-playbook playbook.yaml` ##### EXPECTED RESULTS The same thing as when variables are undefined in other ways. ``` TASK [template] ************************************************************************************************************************************* fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'y' is undefined"} ``` Additionally the line number and file of the child template would be great. But at the very least I want the name of the undefined variable. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ``` fatal: [localhost]: FAILED! => changed=false msg: |- AnsibleUndefinedVariable: Unable to look up a name or access an attribute in template string (Here is the parent x = {{ x }} {% include 'child.j2' %} Back to the parent). 
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable ``` Note that: * the name of the undefined variable does not appear in the error message * the error message contains the contents of the `parent.j2` file, which had no errors in it, and no undefined variables in it * the error message does not contain the content or path or the file `child.j2` which had the error * if the `parent.j2` file is long, this error message can be very long, so it's not even clear which task failed until you scroll up several pages The only way to figure out what went wrong is to use a binary search, deleting half of `child.j2` at a time. Think of the case where you have a working template that uses a variable multiple times in the one template. Then you rename it elsewhere, and forget to rename it in the template. If you try to use the binary search method to find the error, you won't have any luck, because you'll end up in a situation where both halves have the same error. Note that you get the error without nested templating, if you have `{{ x in y }}` in the parent, and no `include`. So this bug isn't caused by nesting, it's just exacerbated, because when the undefined variable is in a nested template, the error message contains the wrong file contents entirely.
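For context, the unhelpful wording comes from plain Python semantics rather than from the template itself. A minimal sketch (the class name here is a hypothetical stand-in for Ansible's `AnsibleUndefined` placeholder, not the real implementation): Python's `in` operator raises a bare `TypeError` naming only the right-hand side's *type* when that object defines neither `__contains__` nor `__iter__`, so the undefined variable's name is already lost by the time the error surfaces.

```python
# Illustrative only -- AnsibleUndefinedish stands in for the undefined
# placeholder that 'y' resolves to inside the template.  Python's 'in'
# needs __contains__ or __iter__ on the right-hand side; with neither,
# the TypeError mentions the type but never the variable name.
class AnsibleUndefinedish:
    pass

try:
    print(1 in AnsibleUndefinedish())
except TypeError as exc:
    print(exc)  # argument of type 'AnsibleUndefinedish' is not iterable
```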
https://github.com/ansible/ansible/issues/70984
https://github.com/ansible/ansible/pull/70990
d62dffafb3671dfc331eef6a847dde05b24a73d0
bf7276a4e88de6e102ad06aa1d0716ae799d87ea
2020-07-30T02:12:28Z
python
2020-07-30T19:57:01Z
test/integration/targets/template/tasks/main.yml
# test code for the template module # (c) 2014, Michael DeHaan <[email protected]> # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. - set_fact: output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}" - name: show python interpreter debug: msg: "{{ ansible_python['executable'] }}" - name: show jinja2 version debug: msg: "{{ lookup('pipe', '{{ ansible_python[\"executable\"] }} -c \"import jinja2; print(jinja2.__version__)\"') }}" - name: get default group shell: id -gn register: group - name: fill in a basic template template: src=foo.j2 dest={{output_dir}}/foo.templated mode=0644 register: template_result - assert: that: - "'changed' in template_result" - "'dest' in template_result" - "'group' in template_result" - "'gid' in template_result" - "'md5sum' in template_result" - "'checksum' in template_result" - "'owner' in template_result" - "'size' in template_result" - "'src' in template_result" - "'state' in template_result" - "'uid' in template_result" - name: verify that the file was marked as changed assert: that: - "template_result.changed == true" # Basic template with non-ascii names - name: Check that non-ascii source and dest work template: src: 'café.j2' dest: '{{ output_dir }}/café.txt' register: template_results - name: Check that the resulting file exists stat: path: '{{ output_dir }}/café.txt' register: stat_results - name: Check that template created the right file assert: that: - 'template_results is changed' - 'stat_results.stat["exists"]' # test for import with context on jinja-2.9 See https://github.com/ansible/ansible/issues/20494 - name: fill in a template using import with context ala issue 20494 template: src=import_with_context.j2 dest={{output_dir}}/import_with_context.templated mode=0644 register: template_result - name: copy known good import_with_context.expected into place copy: src=import_with_context.expected dest={{output_dir}}/import_with_context.expected - name: compare templated file to known good import_with_context shell: diff -uw {{output_dir}}/import_with_context.templated {{output_dir}}/import_with_context.expected register: diff_result - name: verify templated import_with_context matches known good assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # test for nested include https://github.com/ansible/ansible/issues/34886 - name: test if parent variables are defined in nested include template: src=for_loop.j2 dest={{output_dir}}/for_loop.templated mode=0644 - name: save templated output shell: "cat {{output_dir}}/for_loop.templated" register: for_loop_out - debug: var=for_loop_out - name: verify variables got templated assert: that: - '"foo" in for_loop_out.stdout' - '"bar" in for_loop_out.stdout' - '"bam" in for_loop_out.stdout' # test for 'import as' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494 - name: fill in a template using import as ala fails2 case in issue 20494 template: src=import_as.j2 dest={{output_dir}}/import_as.templated 
mode=0644 register: import_as_template_result - name: copy known good import_as.expected into place copy: src=import_as.expected dest={{output_dir}}/import_as.expected - name: compare templated file to known good import_as shell: diff -uw {{output_dir}}/import_as.templated {{output_dir}}/import_as.expected register: import_as_diff_result - name: verify templated import_as matches known good assert: that: - 'import_as_diff_result.stdout == ""' - "import_as_diff_result.rc == 0" # test for 'import as with context' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494 - name: fill in a template using import as with context ala fails2 case in issue 20494 template: src=import_as_with_context.j2 dest={{output_dir}}/import_as_with_context.templated mode=0644 register: import_as_with_context_template_result - name: copy known good import_as_with_context.expected into place copy: src=import_as_with_context.expected dest={{output_dir}}/import_as_with_context.expected - name: compare templated file to known good import_as_with_context shell: diff -uw {{output_dir}}/import_as_with_context.templated {{output_dir}}/import_as_with_context.expected register: import_as_with_context_diff_result - name: verify templated import_as_with_context matches known good assert: that: - 'import_as_with_context_diff_result.stdout == ""' - "import_as_with_context_diff_result.rc == 0" # VERIFY trim_blocks - name: Render a template with "trim_blocks" set to False template: src: trim_blocks.j2 dest: "{{output_dir}}/trim_blocks_false.templated" trim_blocks: False register: trim_blocks_false_result - name: Get checksum of known good trim_blocks_false.expected stat: path: "{{role_path}}/files/trim_blocks_false.expected" register: trim_blocks_false_good - name: Verify templated trim_blocks_false matches known good using checksum assert: that: - "trim_blocks_false_result.checksum == trim_blocks_false_good.stat.checksum" - name: Render a template with "trim_blocks" set to True template: src: trim_blocks.j2 dest: "{{output_dir}}/trim_blocks_true.templated" trim_blocks: True register: trim_blocks_true_result - name: Get checksum of known good trim_blocks_true.expected stat: path: "{{role_path}}/files/trim_blocks_true.expected" register: trim_blocks_true_good - name: Verify templated trim_blocks_true matches known good using checksum assert: that: - "trim_blocks_true_result.checksum == trim_blocks_true_good.stat.checksum" # VERIFY lstrip_blocks - name: Check support for lstrip_blocks in Jinja2 shell: "{{ ansible_python.executable }} -c 'import jinja2; jinja2.defaults.LSTRIP_BLOCKS'" register: lstrip_block_support ignore_errors: True - name: Render a template with "lstrip_blocks" set to False template: src: lstrip_blocks.j2 dest: "{{output_dir}}/lstrip_blocks_false.templated" lstrip_blocks: False register: lstrip_blocks_false_result - name: Get checksum of known good lstrip_blocks_false.expected stat: path: "{{role_path}}/files/lstrip_blocks_false.expected" register: lstrip_blocks_false_good - name: Verify templated lstrip_blocks_false matches known good using checksum assert: that: - "lstrip_blocks_false_result.checksum == lstrip_blocks_false_good.stat.checksum" - name: Render a template with "lstrip_blocks" set to True template: src: lstrip_blocks.j2 dest: "{{output_dir}}/lstrip_blocks_true.templated" lstrip_blocks: True register: lstrip_blocks_true_result ignore_errors: True - name: Verify exception is thrown if Jinja2 does not support lstrip_blocks but lstrip_blocks is used assert: that: - 
"lstrip_blocks_true_result.failed" - 'lstrip_blocks_true_result.msg is search(">=2.7")' when: "lstrip_block_support is failed" - name: Get checksum of known good lstrip_blocks_true.expected stat: path: "{{role_path}}/files/lstrip_blocks_true.expected" register: lstrip_blocks_true_good when: "lstrip_block_support is successful" - name: Verify templated lstrip_blocks_true matches known good using checksum assert: that: - "lstrip_blocks_true_result.checksum == lstrip_blocks_true_good.stat.checksum" when: "lstrip_block_support is successful" # VERIFY CONTENTS - name: check what python version ansible is running on command: "{{ ansible_python.executable }} -c 'import distutils.sysconfig ; print(distutils.sysconfig.get_python_version())'" register: pyver delegate_to: localhost - name: copy known good into place copy: src=foo.txt dest={{output_dir}}/foo.txt - name: compare templated file to known good shell: diff -uw {{output_dir}}/foo.templated {{output_dir}}/foo.txt register: diff_result - name: verify templated file matches known good assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # VERIFY MODE - name: set file mode file: path={{output_dir}}/foo.templated mode=0644 register: file_result - name: ensure file mode did not change assert: that: - "file_result.changed != True" # VERIFY dest as a directory does not break file attributes # Note: expanduser is needed to go down the particular codepath that was broken before - name: setup directory for test file: state=directory dest={{output_dir | expanduser}}/template-dir mode=0755 owner=nobody group={{ group.stdout }} - name: set file mode when the destination is a directory template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }} - name: set file mode when the destination is a directory template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }} register: file_result - name: check that the file has the correct attributes stat: path={{output_dir | expanduser}}/template-dir/foo.j2 register: file_attrs - assert: that: - "file_attrs.stat.uid == 0" - "file_attrs.stat.pw_name == 'root'" - "file_attrs.stat.mode == '0600'" - name: check that the containing directory did not change attributes stat: path={{output_dir | expanduser}}/template-dir/ register: dir_attrs - assert: that: - "dir_attrs.stat.uid != 0" - "dir_attrs.stat.pw_name == 'nobody'" - "dir_attrs.stat.mode == '0755'" - name: Check that template to a directory where the directory does not end with a / is allowed template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir mode=0600 owner=root group={{ group.stdout }} - name: make a symlink to the templated file file: path: '{{ output_dir }}/foo.symlink' src: '{{ output_dir }}/foo.templated' state: link - name: check that templating the symlink results in the file being templated template: src: foo.j2 dest: '{{output_dir}}/foo.symlink' mode: 0600 follow: True register: template_result - assert: that: - "template_result.changed == True" - name: check that the file has the correct attributes stat: path={{output_dir | expanduser}}/template-dir/foo.j2 register: file_attrs - assert: that: - "file_attrs.stat.mode == '0600'" - name: check that templating the symlink again makes no changes template: src: foo.j2 dest: '{{output_dir}}/foo.symlink' mode: 0600 follow: True register: template_result - assert: that: - "template_result.changed == False" # Test strange filenames - name: Create a temp dir for filename tests file: 
state: directory dest: '{{ output_dir }}/filename-tests' - name: create a file with an unusual filename template: src: foo.j2 dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated" register: template_result - assert: that: - "template_result.changed == True" - name: check that the unusual filename was created command: "ls {{ output_dir }}/filename-tests/" register: unusual_results - assert: that: - "\"foo t'e~m\\plated\" in unusual_results.stdout_lines" - "{{unusual_results.stdout_lines| length}} == 1" - name: check that the unusual filename can be checked for changes template: src: foo.j2 dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated" register: template_result - assert: that: - "template_result.changed == False" # check_mode - name: fill in a basic template in check mode template: src=short.j2 dest={{output_dir}}/short.templated register: template_result check_mode: True - name: check file exists stat: path={{output_dir}}/short.templated register: templated - name: verify that the file was marked as changed in check mode but was not created assert: that: - "not templated.stat.exists" - "template_result is changed" - name: fill in a basic template template: src=short.j2 dest={{output_dir}}/short.templated - name: fill in a basic template in check mode template: src=short.j2 dest={{output_dir}}/short.templated register: template_result check_mode: True - name: verify that the file was marked as not changes in check mode assert: that: - "template_result is not changed" - "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')" - name: change var for the template set_fact: templated_var: "changed" - name: fill in a basic template with changed var in check mode template: src=short.j2 dest={{output_dir}}/short.templated register: template_result check_mode: True - name: verify that the file was marked as changed in check mode but the content was not changed assert: that: - "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')" - "template_result is changed" # Create a template using a child template, to ensure that variables # are passed properly from the parent to subtemplate context (issue #20063) - name: test parent and subtemplate creation of context template: src=parent.j2 dest={{output_dir}}/parent_and_subtemplate.templated register: template_result - stat: path={{output_dir}}/parent_and_subtemplate.templated - name: verify that the parent and subtemplate creation worked assert: that: - "template_result is changed" # # template module can overwrite a file that's been hard linked # https://github.com/ansible/ansible/issues/10834 # - name: ensure test dir is absent file: path: '{{ output_dir | expanduser }}/hlink_dir' state: absent - name: create test dir file: path: '{{ output_dir | expanduser }}/hlink_dir' state: directory - name: template out test file to system 1 template: src: foo.j2 dest: '{{ output_dir | expanduser }}/hlink_dir/test_file' - name: make hard link file: src: '{{ output_dir | expanduser }}/hlink_dir/test_file' dest: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink' state: hard - name: template out test file to system 2 template: src: foo.j2 dest: '{{ output_dir | expanduser }}/hlink_dir/test_file' register: hlink_result - name: check that the files are still hardlinked stat: path: '{{ output_dir | expanduser }}/hlink_dir/test_file' register: orig_file - name: check that the files are still hardlinked stat: path: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink' register: hlink_file # We've done nothing 
at this point to update the content of the file so it should still be hardlinked - assert: that: - "hlink_result.changed == False" - "orig_file.stat.inode == hlink_file.stat.inode" - name: change var for the template set_fact: templated_var: "templated_var_loaded" # UNIX TEMPLATE - name: fill in a basic template (Unix) template: src: foo2.j2 dest: '{{ output_dir }}/foo.unix.templated' register: template_result - name: verify that the file was marked as changed (Unix) assert: that: - 'template_result is changed' - name: fill in a basic template again (Unix) template: src: foo2.j2 dest: '{{ output_dir }}/foo.unix.templated' register: template_result2 - name: verify that the template was not changed (Unix) assert: that: - 'template_result2 is not changed' # VERIFY UNIX CONTENTS - name: copy known good into place (Unix) copy: src: foo.unix.txt dest: '{{ output_dir }}/foo.unix.txt' - name: Dump templated file (Unix) command: hexdump -C {{ output_dir }}/foo.unix.templated - name: Dump expected file (Unix) command: hexdump -C {{ output_dir }}/foo.unix.txt - name: compare templated file to known good (Unix) command: diff -u {{ output_dir }}/foo.unix.templated {{ output_dir }}/foo.unix.txt register: diff_result - name: verify templated file matches known good (Unix) assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # DOS TEMPLATE - name: fill in a basic template (DOS) template: src: foo2.j2 dest: '{{ output_dir }}/foo.dos.templated' newline_sequence: '\r\n' register: template_result - name: verify that the file was marked as changed (DOS) assert: that: - 'template_result is changed' - name: fill in a basic template again (DOS) template: src: foo2.j2 dest: '{{ output_dir }}/foo.dos.templated' newline_sequence: '\r\n' register: template_result2 - name: verify that the template was not changed (DOS) assert: that: - 'template_result2 is not changed' # VERIFY DOS CONTENTS - name: copy known good into place (DOS) copy: src: foo.dos.txt dest: '{{ output_dir }}/foo.dos.txt' - name: Dump templated file (DOS) command: hexdump -C {{ output_dir }}/foo.dos.templated - name: Dump expected file (DOS) command: hexdump -C {{ output_dir }}/foo.dos.txt - name: compare templated file to known good (DOS) command: diff -u {{ output_dir }}/foo.dos.templated {{ output_dir }}/foo.dos.txt register: diff_result - name: verify templated file matches known good (DOS) assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # VERIFY DOS CONTENTS - name: copy known good into place (Unix) copy: src: foo.unix.txt dest: '{{ output_dir }}/foo.unix.txt' - name: Dump templated file (Unix) command: hexdump -C {{ output_dir }}/foo.unix.templated - name: Dump expected file (Unix) command: hexdump -C {{ output_dir }}/foo.unix.txt - name: compare templated file to known good (Unix) command: diff -u {{ output_dir }}/foo.unix.templated {{ output_dir }}/foo.unix.txt register: diff_result - name: verify templated file matches known good (Unix) assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # Check that mode=preserve works with template - name: Create a template which has strange permissions copy: content: !unsafe '{{ ansible_managed }}\n' dest: '{{ output_dir }}/foo-template.j2' mode: 0547 delegate_to: localhost - name: Use template with mode=preserve template: src: '{{ output_dir }}/foo-template.j2' dest: '{{ output_dir }}/foo-templated.txt' mode: 'preserve' register: template_results - name: Get permissions from the templated file stat: path: '{{ output_dir }}/foo-templated.txt' register: 
stat_results - name: Check that the resulting file has the correct permissions assert: that: - 'template_results is changed' - 'template_results.mode == "0547"' - 'stat_results.stat["mode"] == "0547"' # Test output_encoding - name: Prepare the list of encodings we want to check, including empty string for defaults set_fact: template_encoding_1252_encodings: ['', 'utf-8', 'windows-1252'] - name: Copy known good encoding_1252_*.expected into place copy: src: 'encoding_1252_{{ item | default("utf-8", true) }}.expected' dest: '{{ output_dir }}/encoding_1252_{{ item }}.expected' loop: '{{ template_encoding_1252_encodings }}' - name: Generate the encoding_1252_* files from templates using various encoding combinations template: src: 'encoding_1252.j2' dest: '{{ output_dir }}/encoding_1252_{{ item }}.txt' output_encoding: '{{ item }}' loop: '{{ template_encoding_1252_encodings }}' - name: Compare the encoding_1252_* templated files to known good command: diff -u {{ output_dir }}/encoding_1252_{{ item }}.expected {{ output_dir }}/encoding_1252_{{ item }}.txt register: encoding_1252_diff_result loop: '{{ template_encoding_1252_encodings }}' - name: Check that nested undefined values return Undefined vars: dict_var: bar: {} list_var: - foo: {} assert: that: - dict_var is defined - dict_var.bar is defined - dict_var.bar.baz is not defined - dict_var.bar.baz | default('DEFAULT') == 'DEFAULT' - dict_var.bar.baz.abc is not defined - dict_var.bar.baz.abc | default('DEFAULT') == 'DEFAULT' - dict_var.baz is not defined - dict_var.baz.abc is not defined - dict_var.baz.abc | default('DEFAULT') == 'DEFAULT' - list_var.0 is defined - list_var.1 is not defined - list_var.0.foo is defined - list_var.0.foo.bar is not defined - list_var.0.foo.bar | default('DEFAULT') == 'DEFAULT' - list_var.1.foo is not defined - list_var.1.foo | default('DEFAULT') == 'DEFAULT' - dict_var is defined - dict_var['bar'] is defined - dict_var['bar']['baz'] is not defined - dict_var['bar']['baz'] | default('DEFAULT') == 'DEFAULT' - dict_var['bar']['baz']['abc'] is not defined - dict_var['bar']['baz']['abc'] | default('DEFAULT') == 'DEFAULT' - dict_var['baz'] is not defined - dict_var['baz']['abc'] is not defined - dict_var['baz']['abc'] | default('DEFAULT') == 'DEFAULT' - list_var[0] is defined - list_var[1] is not defined - list_var[0]['foo'] is defined - list_var[0]['foo']['bar'] is not defined - list_var[0]['foo']['bar'] | default('DEFAULT') == 'DEFAULT' - list_var[1]['foo'] is not defined - list_var[1]['foo'] | default('DEFAULT') == 'DEFAULT' - dict_var['bar'].baz is not defined - dict_var['bar'].baz | default('DEFAULT') == 'DEFAULT' - template: src: template_destpath_test.j2 dest: "{{ output_dir }}/template_destpath.templated" - copy: content: "{{ output_dir}}/template_destpath.templated\n" dest: "{{ output_dir }}/template_destpath.expected" - name: compare templated file to known good template_destpath shell: diff -uw {{output_dir}}/template_destpath.templated {{output_dir}}/template_destpath.expected register: diff_result - name: verify templated template_destpath matches known good assert: that: - 'diff_result.stdout == ""' - "diff_result.rc == 0" # aliases file requires root for template tests so this should be safe - include: backup_test.yml
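As an aside on the `trim_blocks` cases exercised above, here is a standalone Jinja2 sketch (independent of Ansible, assuming only that `jinja2` is importable) of the behaviour the `*.expected` fixtures encode: with `trim_blocks=True`, Jinja2 drops the first newline after a block tag.

```python
# Standalone illustration of the trim_blocks option tested above.
import jinja2

src = "{% if True %}\nhello\n{% endif %}"
for trim in (False, True):
    env = jinja2.Environment(trim_blocks=trim)
    print(trim, repr(env.from_string(src).render()))
# False '\nhello\n'
# True 'hello\n'
```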
closed
ansible/ansible
https://github.com/ansible/ansible
70,348
losing all ansible binaries when upgrading with pip from 2.9 to 2.10.0a1
## SUMMARY When I followed the instructions provided in the mailing list to upgrade using a virtualenv ansible to 2.10.0a1 I lost all the binaries (ansible-playbook, ansible, …). To get them back I had to do a pip install ansible --force I reproduced the problem on another installation. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pip packaging ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below -- empty ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Linux fedora 32, installation in a venv ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ``` - create a venv - install ansible current stable pre 2.10 - verify you have ansible binaries - upgrade to 2.10.0a1 using the command `pip install ansible==2.10.0a1` - observe there is no ansible binaries (ansible-devel) [baptistemm@cactus ansible]$ pip install ansible==2.10.0a1 Collecting ansible==2.10.0a1 Downloading ansible-2.10.0a1.tar.gz (22.2 MB) |████████████████████████████████| 22.2 MB 430 kB/s Collecting ansible-base<2.11,>=2.10.0.dev1 Downloading ansible-base-2.10.0b1.tar.gz (5.7 MB) |████████████████████████████████| 5.7 MB 7.9 MB/s Requirement already satisfied: jinja2 in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (2.11.1) Requirement already satisfied: PyYAML in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (5.3) Requirement already satisfied: cryptography in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (2.8) Collecting packaging Downloading packaging-20.4-py2.py3-none-any.whl (37 kB) Requirement already satisfied: MarkupSafe>=0.23 in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from jinja2->ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (1.1.1) Requirement already satisfied: six>=1.4.1 in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from cryptography->ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (1.14.0) Requirement already satisfied: cffi!=1.11.3,>=1.8 in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from cryptography->ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (1.14.0) Collecting pyparsing>=2.0.2 Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB) |████████████████████████████████| 67 kB 1.0 MB/s Requirement already satisfied: pycparser in /home/baptistemm/Code/ansible-devel/lib/python3.8/site-packages (from cffi!=1.11.3,>=1.8->cryptography->ansible-base<2.11,>=2.10.0.dev1->ansible==2.10.0a1) (2.19) Building wheels for collected packages: ansible, ansible-base Building wheel for ansible (setup.py) ... done Created wheel for ansible: filename=ansible-2.10.0a1-py3-none-any.whl size=41534726 sha256=65ed15efa9139f28bf20c274b7edbec71c6c9b6f45dd610ebfac141d13ec7804 Stored in directory: /home/baptistemm/.cache/pip/wheels/23/02/1c/d650964a7ad3c83dc87a631ae9febc130acf306da38d67aaa1 Building wheel for ansible-base (setup.py) ... 
done Created wheel for ansible-base: filename=ansible_base-2.10.0b1-py3-none-any.whl size=1846180 sha256=028c5198f70dc5b703feec5bd699cc15bfd37e90fe5d0369066e909f6dc0fd1e Stored in directory: /home/baptistemm/.cache/pip/wheels/56/50/db/05621494e62f629d966912934e1ac6f62be07e572c9b1b7423 Successfully built ansible ansible-base Installing collected packages: pyparsing, packaging, ansible-base, ansible Attempting uninstall: ansible Found existing installation: ansible 2.9.10 Uninstalling ansible-2.9.10: Successfully uninstalled ansible-2.9.10 Successfully installed ansible-2.10.0a1 ansible-base-2.10.0b1 packaging-20.4 pyparsing-2.4.7 ``` ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Have binaries installed ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
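A quick way to confirm the symptom from inside the affected virtualenv, sketched with the standard library only (the entry-point names below are the usual ones; the root cause sits in the packaging metadata, not in this check):

```python
# Sanity check after the upgrade: were the console scripts installed?
import shutil

for tool in ("ansible", "ansible-playbook", "ansible-galaxy", "ansible-doc"):
    print(f"{tool:20s} -> {shutil.which(tool) or 'MISSING'}")
```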
https://github.com/ansible/ansible/issues/70348
https://github.com/ansible/ansible/pull/70768
75e8da09501dd9de565cb7854205b8e06615565f
50193356605a9f96caf7772c331ff0950afa4c7c
2020-06-28T13:34:47Z
python
2020-07-31T17:13:54Z
docs/docsite/rst/installation_guide/intro_installation.rst
.. _installation_guide: .. _intro_installation_guide: Installing Ansible =================== This page describes how to install Ansible on different platforms. Ansible is an agentless automation tool that by default manages machines over the SSH protocol. Once installed, Ansible does not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there's no real question about how to upgrade Ansible when moving to a new version. .. contents:: :local: Prerequisites -------------- You install Ansible on a control node, which then uses SSH (by default) to communicate with your managed nodes (those end devices you want to automate). .. _control_node_requirements: Control node requirements ^^^^^^^^^^^^^^^^^^^^^^^^^ Currently Ansible can be run from any machine with Python 2 (version 2.7) or Python 3 (versions 3.5 and higher) installed. This includes Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on. Windows is not supported for the control node. When choosing a control node, bear in mind that any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet. .. note:: macOS by default is configured for a small number of file handles, so if you want to use 15 or more forks you'll need to raise the ulimit with ``sudo launchctl limit maxfiles unlimited``. This command can also fix any "Too many open files" error. .. warning:: Please note that some modules and plugins have additional requirements. For modules these need to be satisfied on the 'target' machine (the managed node) and should be listed in the module specific docs. .. _managed_node_requirements: Managed node requirements ^^^^^^^^^^^^^^^^^^^^^^^^^ On the managed nodes, you need a way to communicate, which is normally SSH. By default this uses SFTP. If that's not available, you can switch to SCP in :ref:`ansible.cfg <ansible_configuration_settings>`. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later). .. note:: * If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can use the :ref:`yum module<yum_module>` or :ref:`dnf module<dnf_module>` in Ansible to install this package on remote systems that do not have it. * By default, Ansible uses the Python interpreter located at :file:`/usr/bin/python` to run its modules. However, some Linux distributions may only have a Python 3 interpreter installed to :file:`/usr/bin/python3` by default. On those systems, you may see an error like:: "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n" you can either set the :ref:`ansible_python_interpreter<ansible_python_interpreter>` inventory variable (see :ref:`inventory`) to point at your interpreter or you can install a Python 2 interpreter for modules to use. You will still need to set :ref:`ansible_python_interpreter<ansible_python_interpreter>` if the Python 2 interpreter is not installed to :command:`/usr/bin/python`. 
* Ansible's :ref:`raw module<raw_module>`, and the :ref:`script module<script_module>`, do not depend on a client side install of Python to run. Technically, you can use Ansible to install a compatible version of Python using the :ref:`raw module<raw_module>`, which then allows you to use everything else. For example, if you need to bootstrap Python 2 onto a RHEL-based system, you can install it as follows: .. code-block:: shell $ ansible myhost --become -m raw -a "yum install -y python2" .. _what_version: Selecting an Ansible version to install --------------------------------------- Which Ansible version to install is based on your particular needs. You can choose any of the following ways to install Ansible: * Install the latest release with your OS package manager (for Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu). * Install with ``pip`` (the Python package manager). * Install from source to access the development (``devel``) version to develop or test the latest features. .. note:: You should only run Ansible from ``devel`` if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. Ansible creates new releases two to three times a year. Due to this short release cycle, minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch. Major bugs will still have maintenance releases when needed, though these are infrequent. .. _installing_the_control_node: .. _from_yum: Installing Ansible on RHEL, CentOS, or Fedora ---------------------------------------------- On Fedora: .. code-block:: bash $ sudo dnf install ansible On RHEL and CentOS: .. code-block:: bash $ sudo yum install ansible RPMs for RHEL 7 and RHEL 8 are available from the `Ansible Engine repository <https://access.redhat.com/articles/3174981>`_. To enable the Ansible Engine repository for RHEL 8, run the following command: .. code-block:: bash $ sudo subscription-manager repos --enable ansible-2.9-for-rhel-8-x86_64-rpms To enable the Ansible Engine repository for RHEL 7, run the following command: .. code-block:: bash $ sudo subscription-manager repos --enable rhel-7-server-ansible-2.9-rpms RPMs for currently supported versions of RHEL and CentOS are also available from `EPEL <https://fedoraproject.org/wiki/EPEL>`_. Ansible version 2.4 and later can manage older operating systems that contain Python 2.6 or higher. .. _from_apt: Installing Ansible on Ubuntu ---------------------------- Ubuntu builds are available `in a PPA here <https://launchpad.net/~ansible/+archive/ubuntu/ansible>`_. To configure the PPA on your machine and install Ansible run these commands: .. code-block:: bash $ sudo apt update $ sudo apt install software-properties-common $ sudo apt-add-repository --yes --update ppa:ansible/ansible $ sudo apt install ansible .. note:: On older Ubuntu distributions, "software-properties-common" is called "python-software-properties". You may want to use ``apt-get`` instead of ``apt`` in older versions. Also, be aware that only newer distributions (i.e. 18.04, 18.10, etc.) have a ``-u`` or ``--update`` flag, so adjust your script accordingly. Debian/Ubuntu packages can also be built from the source checkout, run: .. code-block:: bash $ make deb You may also wish to run from source to get the development branch, which is covered below. Installing Ansible on Debian ---------------------------- Debian users may leverage the same source as the Ubuntu PPA. 
Add the following line to /etc/apt/sources.list: .. code-block:: bash deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main Then run these commands: .. code-block:: bash $ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367 $ sudo apt update $ sudo apt install ansible .. note:: This method has been verified with the Trusty sources in Debian Jessie and Stretch but may not be supported in earlier versions. You may want to use ``apt-get`` instead of ``apt`` in older versions. Installing Ansible on Gentoo with portage ----------------------------------------- .. code-block:: bash $ emerge -av app-admin/ansible To install the newest version, you may need to unmask the Ansible package prior to emerging: .. code-block:: bash $ echo 'app-admin/ansible' >> /etc/portage/package.accept_keywords Installing Ansible on FreeBSD ----------------------------- Though Ansible works with both Python 2 and 3 versions, FreeBSD has different packages for each Python version. So to install you can use: .. code-block:: bash $ sudo pkg install py27-ansible or: .. code-block:: bash $ sudo pkg install py36-ansible You may also wish to install from ports, run: .. code-block:: bash $ sudo make -C /usr/ports/sysutils/ansible install You can also choose a specific version, i.e ``ansible25``. Older versions of FreeBSD worked with something like this (substitute for your choice of package manager): .. code-block:: bash $ sudo pkg install ansible .. _on_macos: Installing Ansible on macOS --------------------------- The preferred way to install Ansible on a Mac is with ``pip``. The instructions can be found in :ref:`from_pip`. If you are running macOS version 10.12 or older, then you should upgrade to the latest ``pip`` to connect to the Python Package Index securely. It should be noted that pip must be run as a module on macOS, and the linked ``pip`` instructions will show you how to do that. If you are installing on macOS Mavericks (10.9), you may encounter some noise from your compiler. A workaround is to do the following:: $ CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install --user ansible .. _from_pkgutil: Installing Ansible on Solaris ----------------------------- Ansible is available for Solaris as `SysV package from OpenCSW <https://www.opencsw.org/packages/ansible/>`_. .. code-block:: bash # pkgadd -d http://get.opencsw.org/now # /opt/csw/bin/pkgutil -i ansible .. _from_pacman: Installing Ansible on Arch Linux --------------------------------- Ansible is available in the Community repository:: $ pacman -S ansible The AUR has a PKGBUILD for pulling directly from GitHub called `ansible-git <https://aur.archlinux.org/packages/ansible-git>`_. Also see the `Ansible <https://wiki.archlinux.org/index.php/Ansible>`_ page on the ArchWiki. .. _from_sbopkg: Installing Ansible on Slackware Linux ------------------------------------- Ansible build script is available in the `SlackBuilds.org <https://slackbuilds.org/apps/ansible/>`_ repository. Can be built and installed using `sbopkg <https://sbopkg.org/>`_. Create queue with Ansible and all dependencies:: # sqg -p ansible Build and install packages from a created queuefile (answer Q for question if sbopkg should use queue or package):: # sbopkg -k -i ansible .. 
_from swupd: Installing Ansible on Clear Linux --------------------------------- Ansible and its dependencies are available as part of the sysadmin host management bundle:: $ sudo swupd bundle-add sysadmin-hostmgmt Update of the software will be managed by the swupd tool:: $ sudo swupd update .. _from_pip: Installing Ansible with ``pip`` -------------------------------- Ansible can be installed with ``pip``, the Python package manager. If ``pip`` isn't already available on your system of Python, run the following commands to install it:: $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py $ python get-pip.py --user Then install Ansible [1]_:: $ python -m pip install --user ansible In order to use the ``paramiko`` connection plugin or modules that require ``paramiko``, install the required module [2]_:: $ python -m pip install --user paramiko If you wish to install Ansible globally, run the following commands:: $ sudo python get-pip.py $ sudo python -m pip install ansible .. note:: Running ``pip`` with ``sudo`` will make global changes to the system. Since ``pip`` does not coordinate with system package managers, it could make changes to your system that leaves it in an inconsistent or non-functioning state. This is particularly true for macOS. Installing with ``--user`` is recommended unless you understand fully the implications of modifying global files on the system. .. note:: Older versions of ``pip`` default to http://pypi.python.org/simple, which no longer works. Please make sure you have the latest version of ``pip`` before installing Ansible. If you have an older version of ``pip`` installed, you can upgrade by following `pip's upgrade instructions <https://pip.pypa.io/en/stable/installing/#upgrading-pip>`_ . .. _from_pip_devel: Installing the development version of Ansible ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. note:: You should only run Ansible from ``devel`` if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. The development version of Ansible can be directly installed from GitHub with pip:: $ python -m pip install --user https://github.com/ansible/ansible/archive/devel.tar.gz Replace ``devel`` in the URL mentioned above, with any other branch or tag on GitHub to install that version:: $ python -m pip install --user https://github.com/ansible/ansible/archive/stable-2.9.tar.gz See :ref:`from_source` for instructions on how to run Ansible directly from source, without the requirement of installation. .. _from_pip_venv: Virtual Environments ^^^^^^^^^^^^^^^^^^^^ Ansible can also be installed inside a new or existing ``virtualenv``:: $ python -m virtualenv ansible # Create a virtualenv if one does not already exist $ source ansible/bin/activate # Activate the virtual environment $ python -m pip install ansible .. _from_source: Running Ansible from source (devel) ----------------------------------- .. note:: You should only run Ansible from ``devel`` if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. Ansible is easy to run from source. You do not need ``root`` permissions to use it and there is no software to actually install. No daemons or database setup are required. .. note:: If you want to use Ansible Tower as the control node, do not use a source installation of Ansible. 
Please use an OS package manager (like ``apt`` or ``yum``) or ``pip`` to install a stable version. To install from source, clone the Ansible git repository: .. code-block:: bash $ git clone https://github.com/ansible/ansible.git $ cd ./ansible Once ``git`` has cloned the Ansible repository, setup the Ansible environment: Using Bash: .. code-block:: bash $ source ./hacking/env-setup Using Fish:: $ source ./hacking/env-setup.fish If you want to suppress spurious warnings/errors, use:: $ source ./hacking/env-setup -q If you don't have ``pip`` installed in your version of Python, install it:: $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py $ python get-pip.py --user Ansible also uses the following Python modules that need to be installed [1]_: .. code-block:: bash $ python -m pip install --user -r ./requirements.txt To update Ansible checkouts, use pull-with-rebase so any local changes are replayed. .. code-block:: bash $ git pull --rebase .. code-block:: bash $ git pull --rebase #same as above $ git submodule update --init --recursive Once running the env-setup script you'll be running from checkout and the default inventory file will be ``/etc/ansible/hosts``. You can optionally specify an inventory file (see :ref:`inventory`) other than ``/etc/ansible/hosts``: .. code-block:: bash $ echo "127.0.0.1" > ~/ansible_hosts $ export ANSIBLE_INVENTORY=~/ansible_hosts You can read more about the inventory file at :ref:`inventory`. Now let's test things with a ping command: .. code-block:: bash $ ansible all -m ping --ask-pass You can also use "sudo make install". .. _tagged_releases: Finding tarballs of tagged releases ----------------------------------- Packaging Ansible or wanting to build a local package yourself, but don't want to do a git checkout? Tarballs of releases are available on the `Ansible downloads <https://releases.ansible.com/ansible>`_ page. These releases are also tagged in the `git repository <https://github.com/ansible/ansible/releases>`_ with the release version. .. _shell_completion: Ansible command shell completion -------------------------------- As of Ansible 2.9, shell completion of the Ansible command line utilities is available and provided through an optional dependency called ``argcomplete``. ``argcomplete`` supports bash, and has limited support for zsh and tcsh. You can install ``python-argcomplete`` from EPEL on Red Hat Enterprise based distributions, and or from the standard OS repositories for many other distributions. For more information about installing and configuration see the `argcomplete documentation <https://argcomplete.readthedocs.io/en/latest/>`_. Installing ``argcomplete`` on RHEL, CentOS, or Fedora ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ On Fedora: .. code-block:: bash $ sudo dnf install python-argcomplete On RHEL and CentOS: .. code-block:: bash $ sudo yum install epel-release $ sudo yum install python-argcomplete Installing ``argcomplete`` with ``apt`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: bash $ sudo apt install python-argcomplete Installing ``argcomplete`` with ``pip`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: bash $ python -m pip install argcomplete Configuring ``argcomplete`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ There are 2 ways to configure ``argcomplete`` to allow shell completion of the Ansible command line utilities: globally or per command. Globally """"""""" Global completion requires bash 4.2. .. 
code-block:: bash $ sudo activate-global-python-argcomplete This will write a bash completion file to a global location. Use ``--dest`` to change the location. Per command """"""""""" If you do not have bash 4.2, you must register each script independently. .. code-block:: bash $ eval $(register-python-argcomplete ansible) $ eval $(register-python-argcomplete ansible-config) $ eval $(register-python-argcomplete ansible-console) $ eval $(register-python-argcomplete ansible-doc) $ eval $(register-python-argcomplete ansible-galaxy) $ eval $(register-python-argcomplete ansible-inventory) $ eval $(register-python-argcomplete ansible-playbook) $ eval $(register-python-argcomplete ansible-pull) $ eval $(register-python-argcomplete ansible-vault) You should place the above commands into your shells profile file such as ``~/.profile`` or ``~/.bash_profile``. ``argcomplete`` with zsh or tcsh ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ See the `argcomplete documentation <https://argcomplete.readthedocs.io/en/latest/>`_. .. _getting_ansible: Ansible on GitHub ----------------- You may also wish to follow the `GitHub project <https://github.com/ansible/ansible>`_ if you have a GitHub account. This is also where we keep the issue tracker for sharing bugs and feature ideas. .. seealso:: :ref:`intro_adhoc` Examples of basic commands :ref:`working_with_playbooks` Learning ansible's configuration management language :ref:`installation_faqs` Ansible Installation related to FAQs `Mailing List <https://groups.google.com/group/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel .. [1] If you have issues with the "pycrypto" package install on macOS, then you may need to try ``CC=clang sudo -E pip install pycrypto``. .. [2] ``paramiko`` was included in Ansible's ``requirements.txt`` prior to 2.8.
closed
ansible/ansible
https://github.com/ansible/ansible
59,125
Documentation for `regex` test is lacking.
##### SUMMARY Please document the actual usage of the `regex` test (`… is regex(…)`). ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME `playbooks_tests.rst` ##### ANSIBLE VERSION Latest on website and in github repo ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION Currently, the only documentation about the `regex` test is the following line: > By default, 'regex' works like search, but regex can be configured to perform other tests as well. There is NO documentation about how "regex can be configured to perform other tests as well". Please fully document this test. _(In general, why do you need your customers to tell you that you need to write documentation?)_
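For reference, the undocumented configurability the reporter is asking about reduces to Python's `re` methods; judging by the test plugin's signature, a `match_type` parameter selects which method runs. A hedged sketch of the equivalent logic (not the plugin source itself):

```python
# Hedged sketch of the 'regex' Jinja test, assuming the signature
# regex(value, pattern, ignorecase=False, multiline=False,
# match_type='search'); match_type picks the compiled-pattern method.
import re

def regex_test(value, pattern, ignorecase=False, multiline=False,
               match_type='search'):
    flags = 0
    if ignorecase:
        flags |= re.I
    if multiline:
        flags |= re.M
    return bool(getattr(re.compile(pattern, flags), match_type)(value))

url = "http://example.com/users/foo/resources/bar"
print(regex_test(url, r"example\.com/\w+/foo"))              # search -> True
print(regex_test(url, r"example\.com", match_type="match"))  # anchored -> False
print(regex_test(url, r"http://.*/bar", match_type="fullmatch"))  # True
```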
https://github.com/ansible/ansible/issues/59125
https://github.com/ansible/ansible/pull/71049
0c855dc70bde3b737329689823be03496026d976
701c638757949280c875edc0eb364ee0e63db4bb
2019-07-16T01:08:18Z
python
2020-08-03T15:19:20Z
docs/docsite/rst/user_guide/playbooks_tests.rst
.. _playbooks_tests: ***** Tests ***** `Tests <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ in Jinja are a way of evaluating template expressions and returning True or False. Jinja ships with many of these. See `builtin tests`_ in the official Jinja template documentation. The main difference between tests and filters are that Jinja tests are used for comparisons, whereas filters are used for data manipulation, and have different applications in jinja. Tests can also be used in list processing filters, like ``map()`` and ``select()`` to choose items in the list. Like all templating, tests always execute on the Ansible controller, **not** on the target of a task, as they test local data. In addition to those Jinja2 tests, Ansible supplies a few more and users can easily create their own. .. contents:: :local: .. _test_syntax: Test syntax =========== `Test syntax <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ varies from `filter syntax <http://jinja.pocoo.org/docs/dev/templates/#filters>`_ (``variable | filter``). Historically Ansible has registered tests as both jinja tests and jinja filters, allowing for them to be referenced using filter syntax. As of Ansible 2.5, using a jinja test as a filter will generate a warning. The syntax for using a jinja test is as follows:: variable is test_name Such as:: result is failed .. _testing_strings: Testing strings =============== To match strings against a substring or a regular expression, use the ``match``, ``search`` or ``regex`` tests:: vars: url: "http://example.com/users/foo/resources/bar" tasks: - debug: msg: "matched pattern 1" when: url is match("http://example.com/users/.*/resources/") - debug: msg: "matched pattern 2" when: url is search("/users/.*/resources/.*") - debug: msg: "matched pattern 3" when: url is search("/users/") - debug: msg: "matched pattern 4" when: url is regex("example.com/\w+/foo") ``match`` succeeds if it finds the pattern at the beginning of the string, while ``search`` succeeds if it finds the pattern anywhere within string. By default, ``regex`` works like ``search``, but ``regex`` can be configured to perform other tests as well. .. _testing_vault: Vault ===== .. versionadded:: 2.10 You can test whether a variable is an inline single vault encrypted value using the ``vault_encrypted`` test. .. code-block:: yaml vars: variable: !vault | $ANSIBLE_VAULT;1.2;AES256;dev 61323931353866666336306139373937316366366138656131323863373866376666353364373761 3539633234313836346435323766306164626134376564330a373530313635343535343133316133 36643666306434616266376434363239346433643238336464643566386135356334303736353136 6565633133366366360a326566323363363936613664616364623437336130623133343530333739 3039 tasks: - debug: msg: '{{ (variable is vault_encrypted) | ternary("Vault encrypted", "Not vault encrypted") }}' .. _testing_truthiness: Testing truthiness ================== .. versionadded:: 2.10 As of Ansible 2.10, you can now perform Python like truthy and falsy checks. .. code-block:: yaml - debug: msg: "Truthy" when: value is truthy vars: value: "some string" - debug: msg: "Falsy" when: value is falsy vars: value: "" Additionally, the ``truthy`` and ``falsy`` tests accept an optional parameter called ``convert_bool`` that will attempt to convert boolean indicators to actual booleans. .. code-block:: yaml - debug: msg: "Truthy" when: value is truthy(convert_bool=True) vars: value: "yes" - debug: msg: "Falsy" when: value is falsy(convert_bool=True) vars: value: "off" .. 
_testing_versions: Comparing versions ================== .. versionadded:: 1.6 .. note:: In 2.5 ``version_compare`` was renamed to ``version`` To compare a version number, such as checking if the ``ansible_facts['distribution_version']`` version is greater than or equal to '12.04', you can use the ``version`` test. The ``version`` test can also be used to evaluate the ``ansible_facts['distribution_version']``:: {{ ansible_facts['distribution_version'] is version('12.04', '>=') }} If ``ansible_facts['distribution_version']`` is greater than or equal to 12.04, this test returns True, otherwise False. The ``version`` test accepts the following operators:: <, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne This test also accepts a 3rd parameter, ``strict`` which defines if strict version parsing as defined by ``distutils.version.StrictVersion`` should be used. The default is ``False`` (using ``distutils.version.LooseVersion``), ``True`` enables strict version parsing:: {{ sample_version_var is version('1.0', operator='lt', strict=True) }} When using ``version`` in a playbook or role, don't use ``{{ }}`` as described in the `FAQ <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#when-should-i-use-also-how-to-interpolate-variables-or-dynamic-variable-names>`_:: vars: my_version: 1.2.3 tasks: - debug: msg: "my_version is higher than 1.0.0" when: my_version is version('1.0.0', '>') .. _math_tests: Set theory tests ================ .. versionadded:: 2.1 .. note:: In 2.5 ``issubset`` and ``issuperset`` were renamed to ``subset`` and ``superset`` To see if a list includes or is included by another list, you can use 'subset' and 'superset':: vars: a: [1,2,3,4,5] b: [2,3] tasks: - debug: msg: "A includes B" when: a is superset(b) - debug: msg: "B is included in A" when: b is subset(a) .. _contains_test: Testing if a list contains a value ================================== .. versionadded:: 2.8 Ansible includes a ``contains`` test which operates similarly, but in reverse of the Jinja2 provided ``in`` test. The ``contains`` test is designed to work with the ``select``, ``reject``, ``selectattr``, and ``rejectattr`` filters:: vars: lacp_groups: - master: lacp0 network: 10.65.100.0/24 gateway: 10.65.100.1 dns4: - 10.65.100.10 - 10.65.100.11 interfaces: - em1 - em2 - master: lacp1 network: 10.65.120.0/24 gateway: 10.65.120.1 dns4: - 10.65.100.10 - 10.65.100.11 interfaces: - em3 - em4 tasks: - debug: msg: "{{ (lacp_groups|selectattr('interfaces', 'contains', 'em1')|first).master }}" .. versionadded:: 2.4 Testing if a list value is True =============================== You can use `any` and `all` to check if any or all elements in a list are true or not:: vars: mylist: - 1 - "{{ 3 == 3 }}" - True myotherlist: - False - True tasks: - debug: msg: "all are true!" when: mylist is all - debug: msg: "at least one is true" when: myotherlist is any .. _path_tests: Testing paths ============= .. 
note:: In 2.5 the following tests were renamed to remove the ``is_`` prefix The following tests can provide information about a path on the controller:: - debug: msg: "path is a directory" when: mypath is directory - debug: msg: "path is a file" when: mypath is file - debug: msg: "path is a symlink" when: mypath is link - debug: msg: "path already exists" when: mypath is exists - debug: msg: "path is {{ (mypath is abs)|ternary('absolute','relative')}}" - debug: msg: "path is the same file as path2" when: mypath is same_file(path2) - debug: msg: "path is a mount" when: mypath is mount Testing size formats ==================== The ``human_readable`` and ``human_to_bytes`` functions let you test your playbooks to make sure you are using the right size format in your tasks, and that you provide Byte format to computers and human-readable format to people. Human readable -------------- Asserts whether the given string is human readable or not. For example:: - name: "Human Readable" assert: that: - '"1.00 Bytes" == 1|human_readable' - '"1.00 bits" == 1|human_readable(isbits=True)' - '"10.00 KB" == 10240|human_readable' - '"97.66 MB" == 102400000|human_readable' - '"0.10 GB" == 102400000|human_readable(unit="G")' - '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")' This would result in:: { "changed": false, "msg": "All assertions passed" } Human to bytes -------------- Returns the given string in the Bytes format. For example:: - name: "Human to Bytes" assert: that: - "{{'0'|human_to_bytes}} == 0" - "{{'0.1'|human_to_bytes}} == 0" - "{{'0.9'|human_to_bytes}} == 1" - "{{'1'|human_to_bytes}} == 1" - "{{'10.00 KB'|human_to_bytes}} == 10240" - "{{ '11 MB'|human_to_bytes}} == 11534336" - "{{ '1.1 GB'|human_to_bytes}} == 1181116006" - "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240" This would result in:: { "changed": false, "msg": "All assertions passed" } .. _test_task_results: Testing task results ==================== The following tasks are illustrative of the tests meant to check the status of tasks:: tasks: - shell: /usr/bin/foo register: result ignore_errors: True - debug: msg: "it failed" when: result is failed # in most cases you'll want a handler, but if you want to do something right now, this is nice - debug: msg: "it changed" when: result is changed - debug: msg: "it succeeded in Ansible >= 2.1" when: result is succeeded - debug: msg: "it succeeded" when: result is success - debug: msg: "it was skipped" when: result is skipped .. note:: From 2.1, you can also use success, failure, change, and skip so that the grammar matches, for those who need to be strict about it. .. _builtin tests: http://jinja.palletsprojects.com/templates/#builtin-tests .. seealso:: :ref:`playbooks_intro` An introduction to playbooks :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables :ref:`playbooks_loops` Looping in playbooks :ref:`playbooks_reuse_roles` Playbook organization by roles :ref:`playbooks_best_practices` Tips and tricks for playbooks `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
62,136
whitespace breaks module name parsing in local_action
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Extra whitespace between a local_action module and the first argument is treated as part of the module name and leads to 'module not found' errors in 2.8.4. (It works in 2.7.0.) ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-playbook ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.8.4 config file = None configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below <nothing> ``` ##### OS / ENVIRONMENT CentOS Linux release 7.6.1810 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml - hosts: localhost gather_facts: false tasks: - name: one space local_action: set_fact a="a" - name: more spaces local_action: set_fact b="b" ``` ##### EXPECTED RESULTS all ok; as with 2.7.0: ``` ... PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=3 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS fails; note the whitespace after 'set_facts' in the error message ``` PLAY [localhost] *********************************************************************************************************************************************** TASK [one space] *********************************************************************************************************************************************** ok: [localhost -> localhost] TASK [more spaces] ********************************************************************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "The module set_fact was not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git pull --rebase' to correct this problem."} PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/62136
https://github.com/ansible/ansible/pull/71040
701c638757949280c875edc0eb364ee0e63db4bb
74c14c67439ff2d5e24dfb142c7fadf701fb6712
2019-09-11T13:29:28Z
python
2020-08-03T15:30:45Z
changelogs/fragments/62136_strip_spaces_from_action_names.yml
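The changelog fragment named above ("strip_spaces_from_action_names") hints at the shape of the fix for issue 62136: stray whitespace left over from splitting the ``local_action`` string must be trimmed before the action name is looked up. Below is a minimal sketch of that idea; it assumes nothing about the merged patch beyond the fragment's title, and the helper name is hypothetical.

.. code-block:: python

    def normalize_action_name(raw_action):
        """Trim surrounding whitespace so 'set_fact ' resolves like 'set_fact'."""
        # Hypothetical illustration of the class of fix, not the actual mod_args patch.
        return raw_action.strip() if raw_action else raw_action

    assert normalize_action_name("set_fact ") == "set_fact"
    assert normalize_action_name("set_fact") == "set_fact"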
closed
ansible/ansible
https://github.com/ansible/ansible
62,136
whitespace breaks module name parsing in local_action
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Extra whitespace between a local_action module and the first argument is treated as part of the module name and leads to 'module not found' errors in 2.8.4. (It works in 2.7.0.) ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-playbook ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.8.4 config file = None configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below <nothing> ``` ##### OS / ENVIRONMENT CentOS Linux release 7.6.1810 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml - hosts: localhost gather_facts: false tasks: - name: one space local_action: set_fact a="a" - name: more spaces local_action: set_fact b="b" ``` ##### EXPECTED RESULTS all ok; as with 2.7.0: ``` ... PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=3 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS fails; note the whitespace after 'set_facts' in the error message ``` PLAY [localhost] *********************************************************************************************************************************************** TASK [one space] *********************************************************************************************************************************************** ok: [localhost -> localhost] TASK [more spaces] ********************************************************************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "The module set_fact was not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git pull --rebase' to correct this problem."} PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/62136
https://github.com/ansible/ansible/pull/71040
701c638757949280c875edc0eb364ee0e63db4bb
74c14c67439ff2d5e24dfb142c7fadf701fb6712
2019-09-11T13:29:28Z
python
2020-08-03T15:30:45Z
lib/ansible/parsing/mod_args.py
# (c) 2014 Michael DeHaan, <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ansible.constants as C from ansible.errors import AnsibleParserError, AnsibleError, AnsibleAssertionError from ansible.module_utils.six import iteritems, string_types from ansible.module_utils._text import to_text from ansible.parsing.splitter import parse_kv, split_args from ansible.plugins.loader import module_loader, action_loader from ansible.template import Templar from ansible.utils.sentinel import Sentinel # For filtering out modules correctly below FREEFORM_ACTIONS = frozenset(C.MODULE_REQUIRE_ARGS) RAW_PARAM_MODULES = FREEFORM_ACTIONS.union(( 'include', 'include_vars', 'include_tasks', 'include_role', 'import_tasks', 'import_role', 'add_host', 'group_by', 'set_fact', 'meta', )) BUILTIN_TASKS = frozenset(( 'meta', 'include', 'include_tasks', 'include_role', 'import_tasks', 'import_role' )) class ModuleArgsParser: """ There are several ways a module and argument set can be expressed: # legacy form (for a shell command) - action: shell echo hi # common shorthand for local actions vs delegate_to - local_action: shell echo hi # most commonly: - copy: src=a dest=b # legacy form - action: copy src=a dest=b # complex args form, for passing structured data - copy: src: a dest: b # gross, but technically legal - action: module: copy args: src: a dest: b # Standard YAML form for command-type modules. In this case, the args specified # will act as 'defaults' and will be overridden by any args specified # in one of the other formats (complex args under the action, or # parsed from the k=v string - command: 'pwd' args: chdir: '/tmp' This class has some of the logic to canonicalize these into the form - module: <module_name> delegate_to: <optional> args: <args> Args may also be munged for certain shell command parameters. """ def __init__(self, task_ds=None, collection_list=None): task_ds = {} if task_ds is None else task_ds if not isinstance(task_ds, dict): raise AnsibleAssertionError("the type of 'task_ds' should be a dict, but is a %s" % type(task_ds)) self._task_ds = task_ds self._collection_list = collection_list # delayed local imports to prevent circular import from ansible.playbook.task import Task from ansible.playbook.handler import Handler # store the valid Task/Handler attrs for quick access self._task_attrs = set(Task._valid_attrs.keys()) self._task_attrs.update(set(Handler._valid_attrs.keys())) # HACK: why are these not FieldAttributes on task with a post-validate to check usage? 
self._task_attrs.update(['local_action', 'static']) self._task_attrs = frozenset(self._task_attrs) self.internal_redirect_list = [] def _split_module_string(self, module_string): ''' when module names are expressed like: action: copy src=a dest=b the first part of the string is the name of the module and the rest are strings pertaining to the arguments. ''' tokens = split_args(module_string) if len(tokens) > 1: return (tokens[0], " ".join(tokens[1:])) else: return (tokens[0], "") def _normalize_parameters(self, thing, action=None, additional_args=None): ''' arguments can be fuzzy. Deal with all the forms. ''' additional_args = {} if additional_args is None else additional_args # final args are the ones we'll eventually return, so first update # them with any additional args specified, which have lower priority # than those which may be parsed/normalized next final_args = dict() if additional_args: if isinstance(additional_args, string_types): templar = Templar(loader=None) if templar.is_template(additional_args): final_args['_variable_params'] = additional_args else: raise AnsibleParserError("Complex args containing variables cannot use bare variables (without Jinja2 delimiters), " "and must use the full variable style ('{{var_name}}')") elif isinstance(additional_args, dict): final_args.update(additional_args) else: raise AnsibleParserError('Complex args must be a dictionary or variable string ("{{var}}").') # how we normalize depends if we figured out what the module name is # yet. If we have already figured it out, it's a 'new style' invocation. # otherwise, it's not if action is not None: args = self._normalize_new_style_args(thing, action) else: (action, args) = self._normalize_old_style_args(thing) # this can occasionally happen, simplify if args and 'args' in args: tmp_args = args.pop('args') if isinstance(tmp_args, string_types): tmp_args = parse_kv(tmp_args) args.update(tmp_args) # only internal variables can start with an underscore, so # we don't allow users to set them directly in arguments if args and action not in FREEFORM_ACTIONS: for arg in args: arg = to_text(arg) if arg.startswith('_ansible_'): raise AnsibleError("invalid parameter specified for action '%s': '%s'" % (action, arg)) # finally, update the args we're going to return with the ones # which were normalized above if args: final_args.update(args) return (action, final_args) def _normalize_new_style_args(self, thing, action): ''' deals with fuzziness in new style module invocations accepting key=value pairs and dictionaries, and returns a dictionary of arguments possible example inputs: 'echo hi', 'shell' {'region': 'xyz'}, 'ec2' standardized outputs like: { _raw_params: 'echo hi', _uses_shell: True } ''' if isinstance(thing, dict): # form is like: { xyz: { x: 2, y: 3 } } args = thing elif isinstance(thing, string_types): # form is like: copy: src=a dest=b check_raw = action in FREEFORM_ACTIONS args = parse_kv(thing, check_raw=check_raw) elif thing is None: # this can happen with modules which take no params, like ping: args = None else: raise AnsibleParserError("unexpected parameter type in action: %s" % type(thing), obj=self._task_ds) return args def _normalize_old_style_args(self, thing): ''' deals with fuzziness in old-style (action/local_action) module invocations returns tuple of (module_name, dictionary_args) possible example inputs: { 'shell' : 'echo hi' } 'shell echo hi' {'module': 'ec2', 'x': 1 } standardized outputs like: ('ec2', { 'x': 1} ) ''' action = None args = None if isinstance(thing, dict): # 
form is like: action: { module: 'copy', src: 'a', dest: 'b' } thing = thing.copy() if 'module' in thing: action, module_args = self._split_module_string(thing['module']) args = thing.copy() check_raw = action in FREEFORM_ACTIONS args.update(parse_kv(module_args, check_raw=check_raw)) del args['module'] elif isinstance(thing, string_types): # form is like: action: copy src=a dest=b (action, args) = self._split_module_string(thing) check_raw = action in FREEFORM_ACTIONS args = parse_kv(args, check_raw=check_raw) else: # need a dict or a string, so giving up raise AnsibleParserError("unexpected parameter type in action: %s" % type(thing), obj=self._task_ds) return (action, args) def parse(self, skip_action_validation=False): ''' Given a task in one of the supported forms, parses and returns the action, arguments, and delegate_to values for the task, dealing with all sorts of levels of fuzziness. ''' thing = None action = None delegate_to = self._task_ds.get('delegate_to', Sentinel) args = dict() self.internal_redirect_list = [] # This is the standard YAML form for command-type modules. We grab # the args and pass them in as additional arguments, which can/will # be overwritten via dict updates from the other arg sources below additional_args = self._task_ds.get('args', dict()) # We can have one of action, local_action, or module specified # action if 'action' in self._task_ds: # an old school 'action' statement thing = self._task_ds['action'] action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args) # local_action if 'local_action' in self._task_ds: # local_action is similar but also implies a delegate_to if action is not None: raise AnsibleParserError("action and local_action are mutually exclusive", obj=self._task_ds) thing = self._task_ds.get('local_action', '') delegate_to = 'localhost' action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args) # module: <stuff> is the more new-style invocation # filter out task attributes so we're only querying unrecognized keys as actions/modules non_task_ds = dict((k, v) for k, v in iteritems(self._task_ds) if (k not in self._task_attrs) and (not k.startswith('with_'))) # walk the filtered input dictionary to see if we recognize a module name for item, value in iteritems(non_task_ds): is_action_candidate = False if item in BUILTIN_TASKS: is_action_candidate = True elif skip_action_validation: is_action_candidate = True else: # If the plugin is resolved and redirected, smuggle the list of candidate names via the task attribute 'internal_redirect_list' context = action_loader.find_plugin_with_context(item, collection_list=self._collection_list) if not context.resolved: context = module_loader.find_plugin_with_context(item, collection_list=self._collection_list) if context.resolved and context.redirect_list: self.internal_redirect_list = context.redirect_list elif context.redirect_list: self.internal_redirect_list = context.redirect_list is_action_candidate = bool(self.internal_redirect_list) if is_action_candidate: # finding more than one module name is a problem if action is not None: raise AnsibleParserError("conflicting action statements: %s, %s" % (action, item), obj=self._task_ds) action = item thing = value action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args) # if we didn't see any module in the task at all, it's not a task really if action is None: if non_task_ds: # there was one non-task action, but we couldn't find it 
bad_action = list(non_task_ds.keys())[0] raise AnsibleParserError("couldn't resolve module/action '{0}'. This often indicates a " "misspelling, missing collection, or incorrect module path.".format(bad_action), obj=self._task_ds) else: raise AnsibleParserError("no module/action detected in task.", obj=self._task_ds) elif args.get('_raw_params', '') != '' and action not in RAW_PARAM_MODULES: templar = Templar(loader=None) raw_params = args.pop('_raw_params') if templar.is_template(raw_params): args['_variable_params'] = raw_params else: raise AnsibleParserError("this task '%s' has extra params, which is only allowed in the following modules: %s" % (action, ", ".join(RAW_PARAM_MODULES)), obj=self._task_ds) return (action, args, delegate_to)
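To make the canonicalization in ``mod_args.py`` above concrete, here is a hedged usage sketch of ``ModuleArgsParser`` exactly as defined in this file; the task datastructure is an invented example, not taken from the issue.

.. code-block:: python

    from ansible.parsing.mod_args import ModuleArgsParser

    # An invented task in the old-school 'action' string form.
    task_ds = {'action': 'copy src=a dest=b'}

    parser = ModuleArgsParser(task_ds=task_ds)
    action, args, delegate_to = parser.parse()

    # Expected: action == 'copy', args == {'src': 'a', 'dest': 'b'},
    # and delegate_to is the Sentinel value since none was set.
    print(action, args, delegate_to)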
closed
ansible/ansible
https://github.com/ansible/ansible
62,136
whitespace breaks module name parsing in local_action
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Extra whitespace between a local_action module and the first argument is treated as part of the module name and leads to 'module not found' errors in 2.8.4. (It works in 2.7.0.) ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-playbook ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.8.4 config file = None configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below <nothing> ``` ##### OS / ENVIRONMENT CentOS Linux release 7.6.1810 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml - hosts: localhost gather_facts: false tasks: - name: one space local_action: set_fact a="a" - name: more spaces local_action: set_fact b="b" ``` ##### EXPECTED RESULTS all ok; as with 2.7.0: ``` ... PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=3 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS fails; note the whitespace after 'set_facts' in the error message ``` PLAY [localhost] *********************************************************************************************************************************************** TASK [one space] *********************************************************************************************************************************************** ok: [localhost -> localhost] TASK [more spaces] ********************************************************************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "The module set_fact was not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git pull --rebase' to correct this problem."} PLAY RECAP ***************************************************************************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/62136
https://github.com/ansible/ansible/pull/71040
701c638757949280c875edc0eb364ee0e63db4bb
74c14c67439ff2d5e24dfb142c7fadf701fb6712
2019-09-11T13:29:28Z
python
2020-08-03T15:30:45Z
test/integration/targets/parsing/roles/test_good_parsing/tasks/main.yml
# test code for the ping module # (c) 2014, Michael DeHaan <[email protected]> # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # various tests of things that should not cause parsing problems - set_fact: test_input: "a=1 a=2 a=3" - set_fact: multi_line: | echo old echo mcdonald echo had echo a echo farm - shell: echo "dog" register: result - assert: that: result.cmd == 'echo "dog"' - shell: echo 'dog' register: result - assert: that: result.cmd == 'echo \'dog\'' - name: a quoted argument is not sent to the shell module as anything but a string parameter shell: echo 'dog' 'executable=/usr/bin/python' register: result - debug: var=result.cmd - assert: that: result.cmd == "echo 'dog' 'executable=/usr/bin/python'" - name: it is valid to pass multiple key=value arguments because the shell doesn't check key=value arguments shell: echo quackquack=here quackquack=everywhere register: result - assert: that: result.cmd == 'echo quackquack=here quackquack=everywhere' - name: the same is true with quoting shell: echo "quackquack=here quackquack=everywhere" register: result - assert: that: result.cmd == 'echo "quackquack=here quackquack=everywhere"' - name: the same is true with quoting (B) shell: echo "quackquack=here" "quackquack=everywhere" register: result - name: the same is true with quoting (C) shell: echo "quackquack=here" 'quackquack=everywhere' register: result - name: the same is true with quoting (D) shell: echo "quackquack=here" 'quackquack=everywhere' register: result - name: the same is true with quoting (E) shell: echo {{ test_input }} register: result - assert: that: result.cmd == "echo a=1 a=2 a=3" - name: more shell duplicates shell: echo foo=bar foo=bar register: result - assert: that: result.cmd == "echo foo=bar foo=bar" - name: raw duplicates, noop raw: env true foo=bar foo=bar - name: multi-line inline shell commands (should use script module but hey) are a thing shell: "{{ multi_line }}" register: result - debug: var=result - assert: that: result.stdout_lines == [ 'old', 'mcdonald', 'had', 'a', 'farm' ] - name: passing same arg to shell command is legit shell: echo foo --arg=a --arg=b failed_when: False # just catch the exit code, parse error is what I care about, but should register and compare result register: result - assert: that: # command shouldn't end in spaces, amend test once fixed - result.cmd == "echo foo --arg=a --arg=b" - name: test includes with params include: test_include.yml fact_name=include_params param="{{ test_input }}" - name: assert the include set the correct fact for the param assert: that: - include_params == test_input - name: test includes with quoted params include: test_include.yml fact_name=double_quoted_param param="this is a param with double quotes" - name: assert the include set the correct fact for the double quoted param assert: that: - double_quoted_param == "this is a param with double quotes" - name: test includes with single quoted params include: 
test_include.yml fact_name=single_quoted_param param='this is a param with single quotes' - name: assert the include set the correct fact for the single quoted param assert: that: - single_quoted_param == "this is a param with single quotes" - name: test includes with quoted params in complex args include: test_include.yml vars: fact_name: complex_param param: "this is a param in a complex arg with double quotes" - name: assert the include set the correct fact for the params in complex args assert: that: - complex_param == "this is a param in a complex arg with double quotes" - name: test variable module name action: "{{ variable_module_name }} msg='this should be debugged'" register: result - name: assert the task with variable module name ran assert: that: - result.msg == "this should be debugged" - name: test conditional includes include: test_include_conditional.yml when: false - name: assert the nested include from test_include_conditional was not set assert: that: - nested_include_var is undefined - name: test omit in complex args set_fact: foo: bar spam: "{{ omit }}" should_not_omit: "prefix{{ omit }}" - assert: that: - foo == 'bar' - spam is undefined - should_not_omit is defined - name: test omit in module args set_fact: > yo=whatsup eggs="{{ omit }}" default_omitted="{{ not_exists|default(omit) }}" should_not_omit_1="prefix{{ omit }}" should_not_omit_2="{{ omit }}suffix" should_not_omit_3="__omit_place_holder__afb6b9bc3d20bfeaa00a1b23a5930f89" - assert: that: - yo == 'whatsup' - eggs is undefined - default_omitted is undefined - should_not_omit_1 is defined - should_not_omit_2 is defined - should_not_omit_3 == "__omit_place_holder__afb6b9bc3d20bfeaa00a1b23a5930f89"
closed
ansible/ansible
https://github.com/ansible/ansible
65,062
Docs: Add examples to common return values page
##### SUMMARY We document the general return values Ansible provides on each playbook run here: https://docs.ansible.com/ansible/devel/reference_appendices/common_return_values.html Add real-world examples for each field. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME docs.ansible.com ##### ANSIBLE VERSION 2.10 ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A
https://github.com/ansible/ansible/issues/65062
https://github.com/ansible/ansible/pull/71046
991714b9d1e878a4c2fda67ffd829724fa7ac67e
864573a38d109fa5299b57ce2eefd0aa2bbfef5e
2019-11-19T14:53:50Z
python
2020-08-03T16:51:58Z
docs/docsite/rst/reference_appendices/common_return_values.rst
.. _common_return_values: Return Values ------------- .. contents:: Topics Ansible modules normally return a data structure that can be registered into a variable, or seen directly when output by the `ansible` program. Each module can optionally document its own unique return values (visible through ansible-doc and on the :ref:`main docsite<ansible_documentation>`). This document covers return values common to all modules. .. note:: Some of these keys might be set by Ansible itself once it processes the module's return information. Common ^^^^^^ backup_file ``````````` For those modules that implement `backup=no|yes` when manipulating files, a path to the backup file created. changed ``````` A boolean indicating whether the task had to make changes. diff ```` Information on differences between the previous and current state. Often a dictionary with entries ``before`` and ``after``, which will then be formatted by the callback plugin to a diff view. failed `````` A boolean that indicates whether the task failed. invocation `````````` Information on how the module was invoked. msg ``` A string with a generic message relayed to the user. rc `` Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on); this field contains the return code of these utilities. results ``````` If this key exists, it indicates that a loop was present for the task and that it contains a list of the normal module 'result' per item. skipped ``````` A boolean that indicates whether the task was skipped. stderr `````` Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on); this field contains the error output of these utilities. stderr_lines ```````````` When `stderr` is returned, we also always provide this field, which is a list of strings, one item per line from the original output. stdout `````` Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on). This field contains the normal output of these utilities. stdout_lines ```````````` When `stdout` is returned, Ansible always provides a list of strings, one item per line from the original output. .. _internal_return_values: Internal use ^^^^^^^^^^^^ These keys can be added by modules but will be removed from registered variables; they are 'consumed' by Ansible itself. ansible_facts ````````````` This key should contain a dictionary which will be appended to the facts assigned to the host. These will be directly accessible and don't require using a registered variable. exception ````````` This key can contain traceback information caused by an exception in a module. It will only be displayed on high verbosity (-vvv). warnings ```````` This key contains a list of strings that will be presented to the user. deprecations ```````````` This key contains a list of dictionaries that will be presented to the user. Keys of the dictionaries are `msg` and `version`; values are strings, and the value for the `version` key can be an empty string. .. seealso:: :ref:`all_modules` Learn about available modules `GitHub modules directory <https://github.com/ansible/ansible/tree/devel/lib/ansible/modules>`_ Browse source of core and extras modules `Mailing List <https://groups.google.com/group/ansible-devel>`_ Development mailing list `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
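Since this page lists the common keys without showing them in context (the gap issue 65062 asks to close), here is a small hedged playbook sketch that registers a command result and reads several of the keys documented above; nothing in it is specific to any one module.

.. code-block:: yaml

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: run a command and register its common return values
          command: echo hello
          register: result
          ignore_errors: true

        - name: show a few of the documented keys
          debug:
            msg:
              - "changed: {{ result.changed }}"
              - "rc: {{ result.rc }}"
              - "stdout_lines: {{ result.stdout_lines }}"
              - "failed: {{ result.failed | default(false) }}"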
closed
ansible/ansible
https://github.com/ansible/ansible
70,923
New changelog categories missing from documentation
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> Two new changelog categories, `breaking_changes`, and `security_fixes`, were added in #69968 but were not added to the [changelog fragment documentation](https://github.com/ansible/ansible/blob/4cc4cebc97ab1b9c5f0c424238d5352274b5dbad/docs/docsite/rst/community/development_process.rst#creating-a-changelog-fragment). <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> `changelogs/config.yaml` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below 2.11 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below default ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> macOS 10.15.6 ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> Do we need to backport #69968 so we can backport security fixes to those branches and have the changelog build successfully? <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/70923
https://github.com/ansible/ansible/pull/71027
e10902d744407a0f482aa00d958a25027cfe7129
4f4436c1240d38bd95829fd0fc31e456864dc24a
2020-07-27T17:53:11Z
python
2020-08-03T17:46:41Z
docs/docsite/rst/community/development_process.rst
.. _community_development_process: ***************************** The Ansible Development Cycle ***************************** The Ansible development cycle happens on two levels. At a macro level, the team plans releases and tracks progress with roadmaps and projects. At a micro level, each PR has its own lifecycle. .. contents:: :local: Macro development: roadmaps, releases, and projects =================================================== If you want to follow the conversation about what features will be added to Ansible for upcoming releases and what bugs are being fixed, you can watch these resources: * the :ref:`roadmaps` * the :ref:`Ansible Release Schedule <release_and_maintenance>` * various GitHub `projects <https://github.com/ansible/ansible/projects>`_ - for example: * the `2.10 release project <https://github.com/ansible/ansible/projects/39>`_ * the `network bugs project <https://github.com/ansible/ansible/projects/20>`_ * the `core documentation project <https://github.com/ansible/ansible/projects/27>`_ .. _community_pull_requests: Micro development: the lifecycle of a PR ======================================== Ansible accepts code through **pull requests** ("PRs" for short). GitHub provides a great overview of `how the pull request process works <https://help.github.com/articles/about-pull-requests/>`_ in general. The ultimate goal of any pull request is to get merged and become part of a collection or ansible-base. Here's an overview of the PR lifecycle: * Contributor opens a PR * Ansibot reviews the PR * Ansibot assigns labels * Ansibot pings maintainers * Shippable runs the test suite * Developers, maintainers, community review the PR * Contributor addresses any feedback from reviewers * Developers, maintainers, community re-review * PR merged or closed Automated PR review: ansibullbot -------------------------------- Because Ansible receives many pull requests, and because we love automating things, we've automated several steps of the process of reviewing and merging pull requests with a tool called Ansibullbot, or Ansibot for short. `Ansibullbot <https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md>`_ serves many functions: - Responds quickly to PR submitters to thank them for submitting their PR - Identifies the community maintainer responsible for reviewing PRs for any files affected - Tracks the current status of PRs - Pings responsible parties to remind them of any PR actions for which they may be responsible - Provides maintainers with the ability to move PRs through the workflow - Identifies PRs abandoned by their submitters so that we can close them - Identifies modules abandoned by their maintainers so that we can find new maintainers Ansibot workflow ^^^^^^^^^^^^^^^^ Ansibullbot runs continuously. You can generally expect to see changes to your issue or pull request within thirty minutes. Ansibullbot examines every open pull request in the repositories, and enforces state roughly according to the following workflow: - If a pull request has no workflow labels, it's considered **new**. Files in the pull request are identified, and the maintainers of those files are pinged by the bot, along with instructions on how to review the pull request. (Note: sometimes we strip labels from a pull request to "reboot" this process.) - If the module maintainer is not ``$team_ansible``, the pull request then goes into the **community_review** state. 
- If the module maintainer is ``$team_ansible``, the pull request then goes into the **core_review** state (and probably sits for a while). - If the pull request is in **community_review** and has received comments from the maintainer: - If the maintainer says ``shipit``, the pull request is labeled **shipit**, whereupon the Core team assesses it for final merge. - If the maintainer says ``needs_info``, the pull request is labeled **needs_info** and the submitter is asked for more info. - If the maintainer says **needs_revision**, the pull request is labeled **needs_revision** and the submitter is asked to fix some things. - If the submitter says ``ready_for_review``, the pull request is put back into **community_review** or **core_review** and the maintainer is notified that the pull request is ready to be reviewed again. - If the pull request is labeled **needs_revision** or **needs_info** and the submitter has not responded lately: - The submitter is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending action**, and the issue or pull request will be closed two weeks after that. - If the submitter responds at all, the clock is reset. - If the pull request is labeled **community_review** and the reviewer has not responded lately: - The reviewer is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and then may be reassigned to ``$team_ansible`` or labeled **core_review**, or often the submitter of the pull request is asked to step up as a maintainer. - If Shippable tests fail, or if the code is not able to be merged, the pull request is automatically put into **needs_revision** along with a message to the submitter explaining why. There are corner cases and frequent refinements, but this is the workflow in general. PR labels ^^^^^^^^^ There are two types of PR Labels generally: **workflow** labels and **information** labels. Workflow labels """"""""""""""" - **community_review**: Pull requests for modules that are currently awaiting review by their maintainers in the Ansible community. - **core_review**: Pull requests for modules that are currently awaiting review by their maintainers on the Ansible Core team. - **needs_info**: Waiting on info from the submitter. - **needs_rebase**: Waiting on the submitter to rebase. - **needs_revision**: Waiting on the submitter to make changes. - **shipit**: Waiting for final review by the core team for potential merge. Information labels """""""""""""""""" - **backport**: this is applied automatically if the PR is requested against any branch that is not devel. The bot immediately assigns the labels backport and ``core_review``. - **bugfix_pull_request**: applied by the bot based on the templatized description of the PR. - **cloud**: applied by the bot based on the paths of the modified files. - **docs_pull_request**: applied by the bot based on the templatized description of the PR. - **easyfix**: applied manually, inconsistently used but sometimes useful. - **feature_pull_request**: applied by the bot based on the templatized description of the PR. - **networking**: applied by the bot based on the paths of the modified files. - **owner_pr**: largely deprecated. Formerly workflow, now informational. Originally, PRs submitted by the maintainer would automatically go to **shipit** based on this label. If the submitter is also a maintainer, we notify the other maintainers and still require one of the maintainers (including the submitter) to give a **shipit**. 
- **pending_action**: applied by the bot to PRs that are not moving. Reviewed every couple of weeks by the community team, who tries to figure out the appropriate action (closure, asking for new maintainers, and so on). Special Labels """""""""""""" - **new_plugin**: this is for new modules or plugins that are not yet in Ansible. **Note:** `new_plugin` kicks off a completely separate process, and frankly it doesn't work very well at present. We're working our best to improve this process. Human PR review --------------- After Ansibot reviews the PR and applies labels, the PR is ready for human review. The most likely reviewers for any PR are the maintainers for the module that PR modifies. Each module has at least one assigned :ref:`maintainer <maintainers>`, listed in the `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_ file. The maintainer's job is to review PRs that affect that module and decide whether they should be merged (``shipit``) or revised (``needs_revision``). We'd like to have at least one community maintainer for every module. If a module has no community maintainers assigned, the maintainer is listed as ``$team_ansible``. Once a human applies the ``shipit`` label, the :ref:`committers <community_committer_guidelines>` decide whether the PR is ready to be merged. Not every PR that gets the ``shipit`` label is actually ready to be merged, but the better our reviewers are, and the better our guidelines are, the more likely it will be that a PR that reaches **shipit** will be mergeable. Making your PR merge-worthy =========================== We don't merge every PR. Here are some tips for making your PR useful, attractive, and merge-worthy. .. _community_changelogs: Changelogs ---------- Changelogs help users and developers keep up with changes to Ansible. Ansible builds a changelog for each release from fragments. You **must** add a changelog fragment to any PR that changes functionality or fixes a bug in ansible-base. You don't have to add a changelog fragment for PRs that add new modules and plugins, because our tooling does that for you automatically. We build short summary changelogs for minor releases as well as for major releases. If you backport a bugfix, include a changelog fragment with the backport PR. .. _changelogs_how_to: Creating a changelog fragment ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A basic changelog fragment is a ``.yaml`` file placed in the ``changelogs/fragments/`` directory. Each file contains a yaml dict with keys like ``bugfixes`` or ``major_changes`` followed by a list of changelog entries of bugfixes or features. Each changelog entry is rst embedded inside of the yaml file which means that certain constructs would need to be escaped so they can be interpreted by rst and not by yaml (or escaped for both yaml and rst if that's your desire). Each PR **must** use a new fragment file rather than adding to an existing one, so we can trace the change back to the PR that introduced it. To create a changelog entry, create a new file with a unique name in the ``changelogs/fragments/`` directory of corresponding repository. The file name should include the PR number and a description of the change. It must end with the file extension ``.yaml``. For example: ``40696-user-backup-shadow-file.yaml`` A single changelog fragment may contain multiple sections but most will only contain one section. 
The top-level keys (bugfixes, major_changes, and so on) are defined in the `config file <https://github.com/ansible/ansible/blob/devel/changelogs/config.yaml>`_ for our release note tool. Here are the valid sections and a description of each: **major_changes** Major changes to Ansible itself. Generally does not include module or plugin changes. **minor_changes** Minor changes to Ansible, modules, or plugins. This includes new features, new parameters added to modules, or behavior changes to existing parameters. **breaking_changes** Changes that break existing playbooks or roles. This includes any change to existing behavior that forces users to update tasks. **deprecated_features** Features that have been deprecated and are scheduled for removal in a future release. **removed_features** Features that were previously deprecated and are now removed. **security_fixes** Fixes that address CVEs or resolve security concerns. Include links to CVE information. **bugfixes** Fixes that resolve issues. **known_issues** Known issues that are currently not fixed or will not be fixed. Each changelog entry must contain a link to its issue between parentheses at the end. If there is no corresponding issue, the entry must contain a link to the PR itself. Most changelog entries will be ``bugfixes`` or ``minor_changes``. When writing a changelog entry that pertains to a particular module, start the entry with ``- [module name] -`` and the following sentence with a lowercase letter. Here are some examples: .. code-block:: yaml bugfixes: - apt_repository - fix crash caused by ``cache.update()`` raising an ``IOError`` due to a timeout in ``apt update`` (https://github.com/ansible/ansible/issues/51995). .. code-block:: yaml minor_changes: - lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443). .. code-block:: yaml bugfixes: - copy - the module was attempting to change the mode of files for remote_src=True even if mode was not set as a parameter. This failed on filesystems which do not have permission bits (https://github.com/ansible/ansible/issues/29444). You can find more example changelog fragments in the `changelog directory <https://github.com/ansible/ansible/tree/stable-2.9/changelogs/fragments>`_ for the 2.9 release. Once you've written the changelog fragment for your PR, commit the file and include it with the pull request. .. _backport_process: Backporting merged PRs ====================== All Ansible PRs must be merged to the ``devel`` branch first. After a pull request has been accepted and merged to the ``devel`` branch, the following instructions will help you create a pull request to backport the change to a previous stable branch. We do **not** backport features. .. note:: These instructions assume that: * ``stable-2.9`` is the targeted release branch for the backport * ``https://github.com/ansible/ansible.git`` is configured as a ``git remote`` named ``upstream``. If you do not use a ``git remote`` named ``upstream``, adjust the instructions accordingly. * ``https://github.com/<yourgithubaccount>/ansible.git`` is configured as a ``git remote`` named ``origin``. If you do not use a ``git remote`` named ``origin``, adjust the instructions accordingly. #. Prepare your devel, stable, and feature branches: :: git fetch upstream git checkout -b backport/2.9/[PR_NUMBER_FROM_DEVEL] upstream/stable-2.9 #. Cherry pick the relevant commit SHA from the devel branch into your feature branch, handling merge conflicts as necessary: :: git cherry-pick -x [SHA_FROM_DEVEL] #. Add a :ref:`changelog fragment <changelogs_how_to>` for the change, and commit it. #. Push your feature branch to your fork on GitHub: :: git push origin backport/2.9/[PR_NUMBER_FROM_DEVEL] #. 
Submit the pull request for ``backport/2.9/[PR_NUMBER_FROM_DEVEL]`` against the ``stable-2.9`` branch #. The Release Manager will decide whether to merge the backport PR before the next minor release. There isn't any need to follow up. Just ensure that the automated tests (CI) are green. .. note:: The choice to use ``backport/2.9/[PR_NUMBER_FROM_DEVEL]`` as the name for the feature branch is somewhat arbitrary, but conveys meaning about the purpose of that branch. It is not required to use this format, but it can be helpful, especially when making multiple backport PRs for multiple stable branches. .. note:: If you prefer, you can use CPython's cherry-picker tool (``pip install --user 'cherry-picker >= 1.3.2'``) to backport commits from devel to stable branches in Ansible. Take a look at the `cherry-picker documentation <https://pypi.org/p/cherry-picker#cherry-picking>`_ for details on installing, configuring, and using it.
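To tie the two categories named in issue 70923 back to the fragment format described above, here is a hedged example fragment using the ``breaking_changes`` and ``security_fixes`` sections; the module name, described changes, and URLs are placeholders, not real entries.

.. code-block:: yaml

    # e.g. changelogs/fragments/NNNNN-example-fragment.yaml (placeholder name)
    breaking_changes:
      - some_module - the ``frobnicate`` option no longer defaults to ``true``; update tasks that relied on the old default (https://github.com/ansible/ansible/pull/NNNNN).
    security_fixes:
      - some_module - avoid leaking the ``api_token`` parameter into module output (https://github.com/ansible/ansible/pull/NNNNN).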
closed
ansible/ansible
https://github.com/ansible/ansible
68,770
CachePluginAdjudicator should call plugin.flush() when flushing cache
##### SUMMARY The current implementation of CachePluginAdjudicator is unable to flush the cache without loading everything in memory using `load_whole_cache`. Iterating over the plugins cache keys would avoid that step. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible.plugins.cache CachePluginAdjudicator ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /home/harm/dev/closso-ansible/ansible.cfg configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/harm/.virtualenvs/ansible/lib/python3.7/site-packages/ansible executable location = /home/harm/.virtualenvs/ansible/bin/ansible python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] ``` ##### CONFIGURATION ``` CACHE_PLUGIN(/home/harm/dev/closso-ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/harm/dev/closso-ansible/ansible.cfg) = ./cache/fact/ CACHE_PLUGIN_TIMEOUT(/home/harm/dev/closso-ansible/ansible.cfg) = 86400 DEFAULT_CALLBACK_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/callback', '/usr/share/ansible/plugins/callback'] DEFAULT_CALLBACK_WHITELIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_FILTER_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/filter', '/usr/share/ansible/plugins/filter'] DEFAULT_GATHERING(/home/harm/dev/closso-ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/netbox_inventory.yml'] DEFAULT_LOOKUP_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/lookup', '/usr/share/ansible/plugins/lookup'] DEFAULT_ROLES_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/roles'] DEFAULT_VARS_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/vars', '/usr/share/ansible/plugins/vars'] DOC_FRAGMENT_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/doc_fragments', '/usr/share/ansible/plugins/doc_fragments'] HOST_KEY_CHECKING(/home/harm/dev/closso-ansible/ansible.cfg) = False INVENTORY_CACHE_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = True INVENTORY_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = ['host_list', 'script', 'yaml', 'ini', 'netbox'] TRANSFORM_INVALID_GROUP_CHARS(/home/harm/dev/closso-ansible/ansible.cfg) = never USE_PERSISTENT_CONNECTIONS(/home/harm/dev/closso-ansible/ansible.cfg) = True ``` ##### STEPS TO REPRODUCE ``` >>> cache = CachePluginAdjudicator('jsonfile', _uri='./cache/netbox') >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.flush() >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.load_whole_cache() >>> cache.keys() dict_keys(['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c']) >>> cache.flush() >>> cache.keys() dict_keys([]) >>> cache._plugin.keys() [] ``` ##### EXPECTED RESULTS Cache should be flushed without loading everything in memory. ##### ACTUAL RESULTS `cache.flush()` does nothing.
https://github.com/ansible/ansible/issues/68770
https://github.com/ansible/ansible/pull/70987
8313cc8fb1a6f879328b5178f790086a2ba580b2
7f62a9d7b5164e474da3c933798ac5a41d9394f6
2020-04-08T11:35:50Z
python
2020-08-03T22:16:15Z
changelogs/fragments/68770_cache_adjudicator_flush.yml
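A hedged sketch of the behavior issue 68770 asks for, not the merged patch: the adjudicator's ``flush()`` should clear the backing cache plugin as well, instead of only emptying whatever happens to be loaded in memory. The ``_cache`` attribute name is an assumption; ``_plugin`` appears in the reproduction above.

.. code-block:: python

    class CachePluginAdjudicator(object):
        def __init__(self, plugin):
            self._plugin = plugin   # real cache plugin, e.g. jsonfile
            self._cache = {}        # in-memory view of loaded keys

        def flush(self):
            self._plugin.flush()    # drop persisted entries via the backing plugin
            self._cache = {}        # and reset the in-memory view too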
closed
ansible/ansible
https://github.com/ansible/ansible
68,770
CachePluginAdjudicator should call plugin.flush() when flushing cache
##### SUMMARY The current implementation of CachePluginAdjudicator is unable to flush the cache without loading everything in memory using `load_whole_cache`. Iterating over the plugins cache keys would avoid that step. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible.plugins.cache CachePluginAdjudicator ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /home/harm/dev/closso-ansible/ansible.cfg configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/harm/.virtualenvs/ansible/lib/python3.7/site-packages/ansible executable location = /home/harm/.virtualenvs/ansible/bin/ansible python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] ``` ##### CONFIGURATION ``` CACHE_PLUGIN(/home/harm/dev/closso-ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/harm/dev/closso-ansible/ansible.cfg) = ./cache/fact/ CACHE_PLUGIN_TIMEOUT(/home/harm/dev/closso-ansible/ansible.cfg) = 86400 DEFAULT_CALLBACK_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/callback', '/usr/share/ansible/plugins/callback'] DEFAULT_CALLBACK_WHITELIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_FILTER_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/filter', '/usr/share/ansible/plugins/filter'] DEFAULT_GATHERING(/home/harm/dev/closso-ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/netbox_inventory.yml'] DEFAULT_LOOKUP_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/lookup', '/usr/share/ansible/plugins/lookup'] DEFAULT_ROLES_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/roles'] DEFAULT_VARS_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/vars', '/usr/share/ansible/plugins/vars'] DOC_FRAGMENT_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/doc_fragments', '/usr/share/ansible/plugins/doc_fragments'] HOST_KEY_CHECKING(/home/harm/dev/closso-ansible/ansible.cfg) = False INVENTORY_CACHE_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = True INVENTORY_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = ['host_list', 'script', 'yaml', 'ini', 'netbox'] TRANSFORM_INVALID_GROUP_CHARS(/home/harm/dev/closso-ansible/ansible.cfg) = never USE_PERSISTENT_CONNECTIONS(/home/harm/dev/closso-ansible/ansible.cfg) = True ``` ##### STEPS TO REPRODUCE ``` >>> cache = CachePluginAdjudicator('jsonfile', _uri='./cache/netbox') >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.flush() >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.load_whole_cache() >>> cache.keys() dict_keys(['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c']) >>> cache.flush() >>> cache.keys() dict_keys([]) >>> cache._plugin.keys() [] ``` ##### EXPECTED RESULTS Cache should be flushed without loading everything in memory. ##### ACTUAL RESULTS `cache.flush()` does nothing.
https://github.com/ansible/ansible/issues/68770
https://github.com/ansible/ansible/pull/70987
8313cc8fb1a6f879328b5178f790086a2ba580b2
7f62a9d7b5164e474da3c933798ac5a41d9394f6
2020-04-08T11:35:50Z
python
2020-08-03T22:16:15Z
docs/docsite/rst/dev_guide/developing_inventory.rst
.. _developing_inventory: **************************** Developing dynamic inventory **************************** .. contents:: Topics :local: As described in :ref:`dynamic_inventory`, Ansible can pull inventory information from dynamic sources, including cloud sources, using the supplied :ref:`inventory plugins <inventory_plugins>`. If the source you want is not currently covered by existing plugins, you can create your own as with any other plugin type. In previous versions you had to create a script or program that can output JSON in the correct format when invoked with the proper arguments. You can still use and write inventory scripts, as we ensured backwards compatibility via the :ref:`script inventory plugin <script_inventory>` and there is no restriction on the programming language used. If you choose to write a script, however, you will need to implement some features yourself such as caching, configuration management, dynamic variable and group composition, and other features. If you use :ref:`inventory plugins <inventory_plugins>` instead, you can leverage the Ansible codebase to add these common features. .. _inventory_sources: Inventory sources ================= Inventory sources are the input strings that inventory plugins work with. An inventory source can be a path to a file or to a script, or it can be raw data that the plugin can interpret. The table below shows some examples of inventory plugins and the kinds of source you can pass to them with ``-i`` on the command line. +--------------------------------------------+-----------------------------------------+ | Plugin | Source | +--------------------------------------------+-----------------------------------------+ | :ref:`host list <host_list_inventory>` | A comma-separated list of hosts | +--------------------------------------------+-----------------------------------------+ | :ref:`yaml <yaml_inventory>` | Path to a YAML format data file | +--------------------------------------------+-----------------------------------------+ | :ref:`constructed <constructed_inventory>` | Path to a YAML configuration file | +--------------------------------------------+-----------------------------------------+ | :ref:`ini <ini_inventory>` | Path to an INI formatted data file | +--------------------------------------------+-----------------------------------------+ | :ref:`virtualbox <virtualbox_inventory>` | Path to a YAML configuration file | +--------------------------------------------+-----------------------------------------+ | :ref:`script plugin <script_inventory>` | Path to an executable that outputs JSON | +--------------------------------------------+-----------------------------------------+ .. _developing_inventory_inventory_plugins: Inventory plugins ================= Like most plugin types (except modules), inventory plugins must be developed in Python. They execute on the controller and should therefore match the :ref:`control_node_requirements`. Most of the documentation in :ref:`developing_plugins` also applies here. You should read that document first for a general understanding and then come back to this document for specifics on inventory plugins. Inventory plugins normally only execute at the start of a run, before playbooks, plays, and roles are loaded. However, you can use the ``meta: refresh_inventory`` task to clear the current inventory and to execute the inventory plugins again, which will generate a new inventory. 
If you use the persistent cache, inventory plugins can also use the configured cache plugin to store and retrieve data. This avoids repeating costly external calls.

.. _developing_an_inventory_plugin:

Developing an inventory plugin
------------------------------

The first thing you want to do is use the base class:

.. code-block:: python

    from ansible.plugins.inventory import BaseInventoryPlugin

    class InventoryModule(BaseInventoryPlugin):

        NAME = 'myplugin'  # used internally by Ansible, it should match the file name but it is not required

If the inventory plugin is in a collection, the NAME should be in the format 'namespace.collection_name.myplugin'.

This class has a couple of methods that each plugin should implement and a few helpers for parsing the inventory source and updating the inventory.

After you have the basic plugin working, you might want to incorporate other features by adding more base classes:

.. code-block:: python

    from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable

    class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):

        NAME = 'myplugin'

For the bulk of the work in the plugin, we mostly deal with two methods: ``verify_file`` and ``parse``.

.. _inventory_plugin_verify_file:

verify_file
^^^^^^^^^^^

This method is used by Ansible to make a quick determination if the inventory source is usable by the plugin. It does not need to be 100% accurate, as there might be overlap in what plugins can handle and Ansible will try the enabled plugins (in order) by default.

.. code-block:: python

    def verify_file(self, path):
        ''' return true/false if this is possibly a valid file for this plugin to consume '''
        valid = False
        if super(InventoryModule, self).verify_file(path):
            # base class verifies that file exists and is readable by current user
            if path.endswith(('virtualbox.yaml', 'virtualbox.yml', 'vbox.yaml', 'vbox.yml')):
                valid = True
        return valid

In this case, from the :ref:`virtualbox inventory plugin <virtualbox_inventory>`, we screen for specific file name patterns to avoid attempting to consume any valid YAML file. You can add any type of condition here, but the most common one is 'extension matching'. If you implement extension matching for YAML configuration files, the path suffix <plugin_name>.<yml|yaml> should be accepted. All valid extensions should be documented in the plugin description.

The following example, from the :ref:`host list <host_list_inventory>` plugin, does not actually use a 'file' but the inventory source string itself:

.. code-block:: python

    def verify_file(self, path):
        ''' don't call base class as we don't expect a path, but a host list '''
        host_list = path
        valid = False
        b_path = to_bytes(host_list, errors='surrogate_or_strict')
        if not os.path.exists(b_path) and ',' in host_list:
            # the path does NOT exist and there is a comma to indicate this is a 'host list'
            valid = True
        return valid

This method is just to expedite the inventory process and avoid unnecessary parsing of sources that are easy to filter out before causing a parse error.
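Putting the extension-matching advice above into practice for the hypothetical ``myplugin``, a sketch of such a ``verify_file`` (not taken from a real plugin) could look like:

.. code-block:: python

    def verify_file(self, path):
        ''' only consume YAML configuration files named after this plugin '''
        if super(InventoryModule, self).verify_file(path):
            # accept the documented <plugin_name>.<yml|yaml> suffixes
            return path.endswith(('myplugin.yaml', 'myplugin.yml'))
        return False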
.. _inventory_plugin_parse:

parse
^^^^^

This method does the bulk of the work in the plugin. It takes the following parameters:

* inventory: inventory object with existing data and the methods to add hosts/groups/variables to inventory
* loader: Ansible's DataLoader. The DataLoader can read files, auto load JSON/YAML and decrypt vaulted data, and cache read files.
* path: string with inventory source (this is usually a path, but is not required)
* cache: indicates whether the plugin should use or avoid caches (cache plugin and/or loader)

The base class does some minimal assignment for reuse in other methods.

.. code-block:: python

    def parse(self, inventory, loader, path, cache=True):
        self.loader = loader
        self.inventory = inventory
        self.templar = Templar(loader=loader)

It is up to the plugin now to deal with the inventory source provided and translate that into the Ansible inventory. To facilitate this, the example below uses a few helper functions:

.. code-block:: python

    NAME = 'myplugin'

    def parse(self, inventory, loader, path, cache=True):

        # call base method to ensure properties are available for use with other helper methods
        super(InventoryModule, self).parse(inventory, loader, path, cache)

        # this method will parse 'common format' inventory sources and
        # update any options declared in DOCUMENTATION as needed
        config = self._read_config_data(path)

        # if NOT using _read_config_data you should call set_options directly,
        # to process any defined configuration for this plugin,
        # if you don't define any options you can skip
        #self.set_options()

        # example consuming options from inventory source
        mysession = apilib.session(user=self.get_option('api_user'),
                                   password=self.get_option('api_pass'),
                                   server=self.get_option('api_server'))

        # make requests to get data to feed into inventory
        mydata = mysession.getitall()

        # parse data and create inventory objects:
        for colo in mydata:
            for server in mydata[colo]['servers']:
                self.inventory.add_host(server['name'])
                self.inventory.set_variable(server['name'], 'ansible_host', server['external_ip'])

The specifics will vary depending on the API and the structure it returns. One thing to keep in mind: if the inventory source is invalid or any other issue crops up, you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.
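For instance, wrapping the external call from the sketch above (``apilib`` is the same placeholder client library used there, not a real package) might look like this:

.. code-block:: python

    from ansible.errors import AnsibleParserError

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        config = self._read_config_data(path)
        try:
            # apilib is the hypothetical client from the example above
            mysession = apilib.session(user=self.get_option('api_user'),
                                       password=self.get_option('api_pass'),
                                       server=self.get_option('api_server'))
            mydata = mysession.getitall()
        except Exception as e:
            # surface the failure to Ansible instead of silently returning an empty inventory
            raise AnsibleParserError('Unable to fetch hosts from %s: %s' % (path, e))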
For examples on how to implement an inventory plugin, see the source code here: `lib/ansible/plugins/inventory <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory>`_.

.. _inventory_plugin_caching:

inventory cache
^^^^^^^^^^^^^^^

Extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class to have the caching system at your disposal.

.. code-block:: yaml

    extends_documentation_fragment:
      - inventory_cache

.. code-block:: python

    class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):

        NAME = 'myplugin'

Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML-based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.

.. code-block:: python

    NAME = 'myplugin'

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self.load_cache_plugin()

Before using the cache, retrieve a unique cache key using the ``get_cache_key`` method. This needs to be done by all inventory modules using the cache, so you don't use/overwrite other parts of the cache.

.. code-block:: python

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)

Now that you've enabled caching, loaded the correct plugin, and retrieved a unique cache key, you can set up the flow of data between the cache and your inventory using the ``cache`` parameter of the ``parse`` method. This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as via ``--flush-cache`` or the meta task ``refresh_inventory``). Although the cache shouldn't be used to populate the inventory when being refreshed, the cache should be updated with the new inventory if the user has enabled caching. You can use ``self._cache`` like a dictionary. The following pattern allows refreshing the inventory to work in conjunction with caching.

.. code-block:: python

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)

        # cache may be True or False at this point to indicate if the inventory is being refreshed
        # get the user's cache option too to see if we should save the cache if it is changing
        user_cache_setting = self.get_option('cache')

        # read if the user has caching enabled and the cache isn't being refreshed
        attempt_to_read_cache = user_cache_setting and cache
        # update if the user has caching enabled and the cache is being refreshed; update this value to True if the cache has expired below
        cache_needs_update = user_cache_setting and not cache

        # attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
        if attempt_to_read_cache:
            try:
                results = self._cache[cache_key]
            except KeyError:
                # this occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
                cache_needs_update = True

        if not attempt_to_read_cache or cache_needs_update:
            # parse the provided inventory source
            results = self.get_inventory()

        if cache_needs_update:
            # set the cache
            self._cache[cache_key] = results

        self.populate(results)

After the ``parse`` method is complete, the contents of ``self._cache`` are used to set the cache plugin if the contents of the cache have changed. You have three other cache methods available:

- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache`` before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified before the ``parse`` method completes
- ``clear_cache`` deletes the keys in ``self._cache`` from your cache plugin

.. _inventory_source_common_format:

Inventory source common format
------------------------------

To simplify development, most plugins use a mostly standard configuration file as the inventory source: YAML based, with just one required field, ``plugin``, which should contain the name of the plugin that is expected to consume the file. Depending on other common features used, other fields might be needed, but each plugin can also add its own custom options as needed. For example, if you use the integrated caching, ``cache_plugin``, ``cache_timeout`` and other cache-related fields could be present.
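Tying this together, a configuration file for the hypothetical ``myplugin`` might look like the following. All values are illustrative; only ``plugin`` is required, and the cache fields come from the ``inventory_cache`` documentation fragment:

.. code-block:: yaml

    # myplugin.yml
    plugin: myplugin
    api_server: https://api.example.com
    cache: true
    cache_plugin: jsonfile
    cache_connection: /tmp/myplugin_inventory_cache
    cache_timeout: 3600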
.. _inventory_development_auto:

The 'auto' plugin
-----------------

Since Ansible 2.5, we include the :ref:`auto inventory plugin <auto_inventory>` enabled by default, which itself just loads other plugins if they use the common YAML configuration format that specifies a ``plugin`` field matching an inventory plugin name. This makes it easier to use your plugin without having to update configurations.

.. _inventory_scripts:
.. _developing_inventory_scripts:

Inventory scripts
=================

Even though we now have inventory plugins, we still support inventory scripts, not only for backwards compatibility but also to allow users to leverage other programming languages.

.. _inventory_script_conventions:

Inventory script conventions
----------------------------

Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments. Other arguments are allowed, but Ansible will not use them; they might still be useful when executing the scripts directly.

When the script is called with the single argument ``--list``, the script must output to stdout a JSON-encoded hash or dictionary containing all of the groups to be managed. Each group's value should be either a hash or dictionary containing a list of each host, any child groups, and potential group variables, or simply a list of hosts::

    {
        "group001": {
            "hosts": ["host001", "host002"],
            "vars": {
                "var1": true
            },
            "children": ["group002"]
        },
        "group002": {
            "hosts": ["host003", "host004"],
            "vars": {
                "var2": 500
            },
            "children": []
        }
    }

If any of the elements of a group are empty, they may be omitted from the output.

When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. For example::

    {
        "VAR001": "VALUE",
        "VAR002": "VALUE"
    }

Printing variables is optional. If the script does not do this, it should print an empty hash or dictionary.

.. _inventory_script_tuning:

Tuning the external inventory script
------------------------------------

.. versionadded:: 1.3

The stock inventory script system detailed above works for all versions of Ansible, but calling ``--host`` for every host can be rather inefficient, especially if it involves API calls to a remote subsystem.

To avoid this inefficiency, if the inventory script returns a top-level element called "_meta", it is possible to return all of the host variables in one script execution. When this meta element contains a value for "hostvars", the inventory script will not be invoked with ``--host`` for each host. This results in a significant performance increase for large numbers of hosts.

The data to be added to the top-level JSON dictionary looks like this::

    {
        # results of inventory script as above go here
        # ...

        "_meta": {
            "hostvars": {
                "host001": {
                    "var001" : "value"
                },
                "host002": {
                    "var002": "value"
                }
            }
        }
    }

To satisfy the requirements of using ``_meta``, and to prevent Ansible from calling your inventory with ``--host``, you must at least populate ``_meta`` with an empty ``hostvars`` dictionary. For example::

    {
        # results of inventory script as above go here
        # ...

        "_meta": {
            "hostvars": {}
        }
    }
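Putting those conventions together, a minimal script could look like the sketch below. The data is hard-coded for illustration; a real script would build it from an external source:

.. code-block:: python

    #!/usr/bin/env python
    import argparse
    import json

    # static data for illustration only
    INVENTORY = {
        "group001": {
            "hosts": ["host001", "host002"],
            "vars": {"var1": True},
            "children": ["group002"],
        },
        "group002": {
            "hosts": ["host003", "host004"],
            "vars": {"var2": 500},
            "children": [],
        },
        "_meta": {
            "hostvars": {
                "host001": {"var001": "value"},
                "host002": {"var002": "value"},
            }
        },
    }

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('--list', action='store_true')
        parser.add_argument('--host')
        args = parser.parse_args()

        if args.host:
            # with _meta populated Ansible will not call --host, but support it anyway
            print(json.dumps(INVENTORY['_meta']['hostvars'].get(args.host, {})))
        else:
            print(json.dumps(INVENTORY))

    if __name__ == '__main__':
        main()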
.. _replacing_inventory_ini_with_dynamic_provider:

If you intend to replace an existing static inventory file with an inventory script, it must return a JSON object which contains an 'all' group that includes every host in the inventory as a member and every group in the inventory as a child. It should also include an 'ungrouped' group which contains all hosts which are not members of any other group. A skeleton example of this JSON object is:

.. code-block:: json

    {
        "_meta": {
            "hostvars": {}
        },
        "all": {
            "children": [
                "ungrouped"
            ]
        },
        "ungrouped": {
            "children": [
            ]
        }
    }

An easy way to see how this should look is using :ref:`ansible-inventory`, which also supports ``--list`` and ``--host`` parameters like an inventory script would.

.. seealso::

   :ref:`developing_api`
       Python API to Playbooks and Ad Hoc Task Execution
   :ref:`developing_modules_general`
       Get started with developing a module
   :ref:`developing_plugins`
       How to develop plugins
   `Ansible Tower <https://www.ansible.com/products/tower>`_
       REST API endpoint and GUI for Ansible, syncs with dynamic inventory
   `Development Mailing List <https://groups.google.com/group/ansible-devel>`_
       Mailing list for development topics
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
68,770
CachePluginAdjudicator should call plugin.flush() when flushing cache
##### SUMMARY The current implementation of CachePluginAdjudicator is unable to flush the cache without loading everything in memory using `load_whole_cache`. Iterating over the plugins cache keys would avoid that step. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible.plugins.cache CachePluginAdjudicator ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /home/harm/dev/closso-ansible/ansible.cfg configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/harm/.virtualenvs/ansible/lib/python3.7/site-packages/ansible executable location = /home/harm/.virtualenvs/ansible/bin/ansible python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] ``` ##### CONFIGURATION ``` CACHE_PLUGIN(/home/harm/dev/closso-ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/harm/dev/closso-ansible/ansible.cfg) = ./cache/fact/ CACHE_PLUGIN_TIMEOUT(/home/harm/dev/closso-ansible/ansible.cfg) = 86400 DEFAULT_CALLBACK_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/callback', '/usr/share/ansible/plugins/callback'] DEFAULT_CALLBACK_WHITELIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_FILTER_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/filter', '/usr/share/ansible/plugins/filter'] DEFAULT_GATHERING(/home/harm/dev/closso-ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/netbox_inventory.yml'] DEFAULT_LOOKUP_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/lookup', '/usr/share/ansible/plugins/lookup'] DEFAULT_ROLES_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/roles'] DEFAULT_VARS_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/vars', '/usr/share/ansible/plugins/vars'] DOC_FRAGMENT_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/doc_fragments', '/usr/share/ansible/plugins/doc_fragments'] HOST_KEY_CHECKING(/home/harm/dev/closso-ansible/ansible.cfg) = False INVENTORY_CACHE_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = True INVENTORY_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = ['host_list', 'script', 'yaml', 'ini', 'netbox'] TRANSFORM_INVALID_GROUP_CHARS(/home/harm/dev/closso-ansible/ansible.cfg) = never USE_PERSISTENT_CONNECTIONS(/home/harm/dev/closso-ansible/ansible.cfg) = True ``` ##### STEPS TO REPRODUCE ``` >>> cache = CachePluginAdjudicator('jsonfile', _uri='./cache/netbox') >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.flush() >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.load_whole_cache() >>> cache.keys() dict_keys(['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c']) >>> cache.flush() >>> cache.keys() dict_keys([]) >>> cache._plugin.keys() [] ``` ##### EXPECTED RESULTS Cache should be flushed without loading everything in memory. ##### ACTUAL RESULTS `cache.flush()` does nothing.
https://github.com/ansible/ansible/issues/68770
https://github.com/ansible/ansible/pull/70987
8313cc8fb1a6f879328b5178f790086a2ba580b2
7f62a9d7b5164e474da3c933798ac5a41d9394f6
2020-04-08T11:35:50Z
python
2020-08-03T22:16:15Z
docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst
.. _porting_2.11_guide_base:

*******************************
Ansible-base 2.11 Porting Guide
*******************************

This section discusses the behavioral changes between Ansible-base 2.10 and Ansible-base 2.11.

It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible-base.

We suggest you read this page along with the `Ansible-base Changelog for 2.11 <https://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst>`_ to understand what updates you may need to make.

Ansible-base is mainly of interest for developers and users who only want to use a small, controlled subset of the available collections. Regular users should install ansible.

The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.

.. contents::

Playbook
========

No notable changes

Command Line
============

No notable changes

Deprecated
==========

No notable changes

Modules
=======

Change to Default File Permissions
----------------------------------

To address CVE-2020-1736, the default permissions for certain files created by Ansible using ``atomic_move()`` were changed from ``0o666`` to ``0o600``. The default permissions value was only used for the temporary file before it was moved into its place, or for newly created files. If the file existed when the new temporary file was moved into place, Ansible would use the permissions of the existing file. If there was no existing file, Ansible would retain the default file permissions, combined with the system ``umask``, of the temporary file.

Most modules that call ``atomic_move()`` also call ``set_fs_attributes_if_different()`` or ``set_mode_if_different()``, which will set the permissions of the file to what is specified in the task.

A new warning will be displayed when all of the following conditions are true:

- The file at the final destination, not the temporary file, does not exist
- A module supports setting ``mode`` but it was not specified for the task
- The module calls ``atomic_move()`` but does not later call ``set_fs_attributes_if_different()`` or ``set_mode_if_different()`` with a ``mode`` specified

The following modules call ``atomic_move()`` but do not call ``set_fs_attributes_if_different()`` or ``set_mode_if_different()`` and do not support setting ``mode``. This means that for the files they create, the default permissions have changed and there is no indication:

- M(known_hosts)
- M(service)

Code Audit
~~~~~~~~~~

The code was audited for modules that use ``atomic_move()`` but **do not** later call ``set_fs_attributes_if_different()`` or ``set_mode_if_different()``. Modules that provide no means for specifying the ``mode`` will not display a warning message since there is no way for the playbook author to remove the warning. The behavior of each module with regard to the default permissions of temporary files and the permissions of newly created files is explained below.

known_hosts
^^^^^^^^^^^

The M(known_hosts) module uses ``atomic_move()`` to operate on the ``known_hosts`` file specified by the ``path`` parameter in the module. It creates a temporary file using ``tempfile.NamedTemporaryFile()``, which creates a temporary file that is readable and writable only by the creating user ID.

service
^^^^^^^

The M(service) module uses ``atomic_move()`` to operate on the default rc file, which is the first found of ``/etc/rc.conf``, ``/etc/rc.conf.local``, and ``/usr/local/etc/rc.conf``. Since these files almost always exist on the target system, they will not be created and the existing permissions of the file will be used.
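Although not part of the audit below, one practical mitigation is worth noting: for modules that do support ``mode``, setting it explicitly in the task removes any dependence on these defaults. A minimal illustration (module choice and paths are arbitrary examples):

.. code-block:: yaml

    - name: Write a config file with explicit, predictable permissions
      template:
        src: app.conf.j2
        dest: /etc/app.conf
        mode: '0644'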
**The following modules were included in Ansible <= 2.9. They have moved to collections but are documented here for completeness.**

authorized_key
^^^^^^^^^^^^^^

The M(authorized_key) module uses ``atomic_move()`` to operate on the ``authorized_key`` file. A temporary file is created with ``tempfile.mkstemp()`` before being moved into place. The temporary file is readable and writable only by the creating user ID. The M(authorized_key) module manages the permissions of the ``.ssh`` directory and ``authorized_keys`` files if ``managed_dirs`` is set to ``True``, which is the default. The module sets the ``.ssh`` directory owner and group to the ``uid`` and ``gid`` of the user specified in the ``user`` parameter and directory permissions to ``700``. The module sets the ``authorized_key`` file owner and group to the ``uid`` and ``gid`` of the user specified in the ``user`` parameter and file permissions to ``600``. These values cannot be controlled by module parameters.

interfaces_file
^^^^^^^^^^^^^^^

The M(interfaces_file) module uses ``atomic_move()`` to operate on ``/etc/network/interfaces`` or the ``dest`` specified by the module. A temporary file is created with ``tempfile.mkstemp()`` before being moved into place. The temporary file is readable and writable only by the creating user ID. If the file specified by ``path`` does not exist, it will retain the permissions of the temporary file once moved into place.

pam_limits
^^^^^^^^^^

The M(pam_limits) module uses ``atomic_move()`` to operate on ``/etc/security/limits.conf`` or the value of ``dest``. A temporary file is created using ``tempfile.NamedTemporaryFile()``, which is only readable and writable by the creating user ID. The temporary file will inherit the permissions of the file specified by ``dest``, or it will retain the permissions that only allow the creating user ID to read and write the file.

pamd
^^^^

The M(pamd) module uses ``atomic_move()`` to operate on a file in ``/etc/pam.d``. The path and the file can be specified by setting the ``path`` and ``name`` parameters. A temporary file is created using ``tempfile.NamedTemporaryFile()``, which is only readable and writable by the creating user ID. The temporary file will inherit the permissions of the file located at ``[dest]/[name]``, or it will retain the permissions of the temporary file that only allow the creating user ID to read and write the file.

redhat_subscription
^^^^^^^^^^^^^^^^^^^

The M(redhat_subscription) module uses ``atomic_move()`` to operate on ``/etc/yum/pluginconf.d/rhnplugin.conf`` and ``/etc/yum/pluginconf.d/subscription-manager.conf``. A temporary file is created with ``tempfile.mkstemp()`` before being moved into place. The temporary file is readable and writable only by the creating user ID, and the temporary file will inherit the permissions of the existing file once it is moved into place.

selinux
^^^^^^^

The M(selinux) module uses ``atomic_move()`` to operate on ``/etc/selinux/config`` or the value specified by ``configfile``. The module will fail if ``configfile`` does not exist before any temporary data is written to disk. A temporary file is created with ``tempfile.mkstemp()`` before being moved into place. The temporary file is readable and writable only by the creating user ID.
Since the file specified by ``configfile`` must exist, the temporary file will inherit the permissions of that file once it is moved into place.

sysctl
^^^^^^

The M(sysctl) module uses ``atomic_move()`` to operate on ``/etc/sysctl.conf`` or the value specified by ``sysctl_file``. The module will fail if ``sysctl_file`` does not exist before any temporary data is written to disk. A temporary file is created with ``tempfile.mkstemp()`` before being moved into place. The temporary file is readable and writable only by the creating user ID. Since the file specified by ``sysctl_file`` must exist, the temporary file will inherit the permissions of that file once it is moved into place.

* The ``apt_key`` module has explicitly defined ``file`` as mutually exclusive with ``data``, ``keyserver`` and ``url``. They cannot be used together anymore.

Modules removed
---------------

The following modules no longer exist:

* No notable changes

Deprecation notices
-------------------

No notable changes

Noteworthy module changes
-------------------------

* facts - ``ansible_virtualization_type`` now tries to report a more accurate result than ``xen`` when virtualized and not running on Xen.

Plugins
=======

No notable changes

Porting custom scripts
======================

No notable changes

Networking
==========

No notable changes
closed
ansible/ansible
https://github.com/ansible/ansible
68,770
CachePluginAdjudicator should call plugin.flush() when flushing cache
##### SUMMARY The current implementation of CachePluginAdjudicator is unable to flush the cache without loading everything in memory using `load_whole_cache`. Iterating over the plugins cache keys would avoid that step. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible.plugins.cache CachePluginAdjudicator ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /home/harm/dev/closso-ansible/ansible.cfg configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/harm/.virtualenvs/ansible/lib/python3.7/site-packages/ansible executable location = /home/harm/.virtualenvs/ansible/bin/ansible python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] ``` ##### CONFIGURATION ``` CACHE_PLUGIN(/home/harm/dev/closso-ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/harm/dev/closso-ansible/ansible.cfg) = ./cache/fact/ CACHE_PLUGIN_TIMEOUT(/home/harm/dev/closso-ansible/ansible.cfg) = 86400 DEFAULT_CALLBACK_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/callback', '/usr/share/ansible/plugins/callback'] DEFAULT_CALLBACK_WHITELIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_FILTER_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/filter', '/usr/share/ansible/plugins/filter'] DEFAULT_GATHERING(/home/harm/dev/closso-ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/netbox_inventory.yml'] DEFAULT_LOOKUP_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/lookup', '/usr/share/ansible/plugins/lookup'] DEFAULT_ROLES_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/roles'] DEFAULT_VARS_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/vars', '/usr/share/ansible/plugins/vars'] DOC_FRAGMENT_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/doc_fragments', '/usr/share/ansible/plugins/doc_fragments'] HOST_KEY_CHECKING(/home/harm/dev/closso-ansible/ansible.cfg) = False INVENTORY_CACHE_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = True INVENTORY_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = ['host_list', 'script', 'yaml', 'ini', 'netbox'] TRANSFORM_INVALID_GROUP_CHARS(/home/harm/dev/closso-ansible/ansible.cfg) = never USE_PERSISTENT_CONNECTIONS(/home/harm/dev/closso-ansible/ansible.cfg) = True ``` ##### STEPS TO REPRODUCE ``` >>> cache = CachePluginAdjudicator('jsonfile', _uri='./cache/netbox') >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.flush() >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.load_whole_cache() >>> cache.keys() dict_keys(['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c']) >>> cache.flush() >>> cache.keys() dict_keys([]) >>> cache._plugin.keys() [] ``` ##### EXPECTED RESULTS Cache should be flushed without loading everything in memory. ##### ACTUAL RESULTS `cache.flush()` does nothing.
https://github.com/ansible/ansible/issues/68770
https://github.com/ansible/ansible/pull/70987
8313cc8fb1a6f879328b5178f790086a2ba580b2
7f62a9d7b5164e474da3c933798ac5a41d9394f6
2020-04-08T11:35:50Z
python
2020-08-03T22:16:15Z
lib/ansible/plugins/cache/__init__.py
# (c) 2014, Michael DeHaan <[email protected]> # (c) 2018, Ansible Project # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. from __future__ import (absolute_import, division, print_function) __metaclass__ = type import copy import os import time import errno from abc import ABCMeta, abstractmethod from ansible import constants as C from ansible.errors import AnsibleError from ansible.module_utils.six import with_metaclass from ansible.module_utils._text import to_bytes, to_text from ansible.module_utils.common._collections_compat import MutableMapping from ansible.plugins import AnsiblePlugin from ansible.plugins.loader import cache_loader from ansible.utils.collection_loader import resource_from_fqcr from ansible.utils.display import Display from ansible.vars.fact_cache import FactCache as RealFactCache display = Display() class FactCache(RealFactCache): """ This is for backwards compatibility. Will be removed after deprecation. It was removed as it wasn't actually part of the cache plugin API. It's actually the code to make use of cache plugins, not the cache plugin itself. Subclassing it wouldn't yield a usable Cache Plugin and there was no facility to use it as anything else. """ def __init__(self, *args, **kwargs): display.deprecated('ansible.plugins.cache.FactCache has been moved to' ' ansible.vars.fact_cache.FactCache. If you are looking for the class' ' to subclass for a cache plugin, you want' ' ansible.plugins.cache.BaseCacheModule or one of its subclasses.', version='2.12', collection_name='ansible.builtin') super(FactCache, self).__init__(*args, **kwargs) class BaseCacheModule(AnsiblePlugin): # Backwards compat only. Just import the global display instead _display = display def __init__(self, *args, **kwargs): # Third party code is not using cache_loader to load plugin - fall back to previous behavior if not hasattr(self, '_load_name'): display.deprecated('Rather than importing custom CacheModules directly, use ansible.plugins.loader.cache_loader', version='2.14', collection_name='ansible.builtin') self._load_name = self.__module__.split('.')[-1] self._load_name = resource_from_fqcr(self.__module__) super(BaseCacheModule, self).__init__() self.set_options(var_options=args, direct=kwargs) @abstractmethod def get(self, key): pass @abstractmethod def set(self, key, value): pass @abstractmethod def keys(self): pass @abstractmethod def contains(self, key): pass @abstractmethod def delete(self, key): pass @abstractmethod def flush(self): pass @abstractmethod def copy(self): pass class BaseFileCacheModule(BaseCacheModule): """ A caching module backed by file based storage. 
""" def __init__(self, *args, **kwargs): try: super(BaseFileCacheModule, self).__init__(*args, **kwargs) self._cache_dir = self._get_cache_connection(self.get_option('_uri')) self._timeout = float(self.get_option('_timeout')) except KeyError: self._cache_dir = self._get_cache_connection(C.CACHE_PLUGIN_CONNECTION) self._timeout = float(C.CACHE_PLUGIN_TIMEOUT) self.plugin_name = resource_from_fqcr(self.__module__) self._cache = {} self.validate_cache_connection() def _get_cache_connection(self, source): if source: try: return os.path.expanduser(os.path.expandvars(source)) except TypeError: pass def validate_cache_connection(self): if not self._cache_dir: raise AnsibleError("error, '%s' cache plugin requires the 'fact_caching_connection' config option " "to be set (to a writeable directory path)" % self.plugin_name) if not os.path.exists(self._cache_dir): try: os.makedirs(self._cache_dir) except (OSError, IOError) as e: raise AnsibleError("error in '%s' cache plugin while trying to create cache dir %s : %s" % (self.plugin_name, self._cache_dir, to_bytes(e))) else: for x in (os.R_OK, os.W_OK, os.X_OK): if not os.access(self._cache_dir, x): raise AnsibleError("error in '%s' cache, configured path (%s) does not have necessary permissions (rwx), disabling plugin" % ( self.plugin_name, self._cache_dir)) def _get_cache_file_name(self, key): prefix = self.get_option('_prefix') if prefix: cachefile = "%s/%s%s" % (self._cache_dir, prefix, key) else: cachefile = "%s/%s" % (self._cache_dir, key) return cachefile def get(self, key): """ This checks the in memory cache first as the fact was not expired at 'gather time' and it would be problematic if the key did expire after some long running tasks and user gets 'undefined' error in the same play """ if key not in self._cache: if self.has_expired(key) or key == "": raise KeyError cachefile = self._get_cache_file_name(key) try: value = self._load(cachefile) self._cache[key] = value except ValueError as e: display.warning("error in '%s' cache plugin while trying to read %s : %s. " "Most likely a corrupt file, so erasing and failing." % (self.plugin_name, cachefile, to_bytes(e))) self.delete(key) raise AnsibleError("The cache file %s was corrupt, or did not otherwise contain valid data. " "It has been removed, so you can re-run your command now." 
% cachefile) except (OSError, IOError) as e: display.warning("error in '%s' cache plugin while trying to read %s : %s" % (self.plugin_name, cachefile, to_bytes(e))) raise KeyError except Exception as e: raise AnsibleError("Error while decoding the cache file %s: %s" % (cachefile, to_bytes(e))) return self._cache.get(key) def set(self, key, value): self._cache[key] = value cachefile = self._get_cache_file_name(key) try: self._dump(value, cachefile) except (OSError, IOError) as e: display.warning("error in '%s' cache plugin while trying to write to %s : %s" % (self.plugin_name, cachefile, to_bytes(e))) def has_expired(self, key): if self._timeout == 0: return False cachefile = self._get_cache_file_name(key) try: st = os.stat(cachefile) except (OSError, IOError) as e: if e.errno == errno.ENOENT: return False else: display.warning("error in '%s' cache plugin while trying to stat %s : %s" % (self.plugin_name, cachefile, to_bytes(e))) return False if time.time() - st.st_mtime <= self._timeout: return False if key in self._cache: del self._cache[key] return True def keys(self): keys = [] for k in os.listdir(self._cache_dir): if not (k.startswith('.') or self.has_expired(k)): keys.append(k) return keys def contains(self, key): cachefile = self._get_cache_file_name(key) if key in self._cache: return True if self.has_expired(key): return False try: os.stat(cachefile) return True except (OSError, IOError) as e: if e.errno == errno.ENOENT: return False else: display.warning("error in '%s' cache plugin while trying to stat %s : %s" % (self.plugin_name, cachefile, to_bytes(e))) def delete(self, key): try: del self._cache[key] except KeyError: pass try: os.remove(self._get_cache_file_name(key)) except (OSError, IOError): pass # TODO: only pass on non existing? def flush(self): self._cache = {} for key in self.keys(): self.delete(key) def copy(self): ret = dict() for key in self.keys(): ret[key] = self.get(key) return ret @abstractmethod def _load(self, filepath): """ Read data from a filepath and return it as a value :arg filepath: The filepath to read from. :returns: The value stored in the filepath This method reads from the file on disk and takes care of any parsing and transformation of the data before returning it. The value returned should be what Ansible would expect if it were uncached data. .. note:: Filehandles have advantages but calling code doesn't know whether this file is text or binary, should be decoded, or accessed via a library function. Therefore the API uses a filepath and opens the file inside of the method. """ pass @abstractmethod def _dump(self, value, filepath): """ Write data to a filepath :arg value: The value to store :arg filepath: The filepath to store it at """ pass class CachePluginAdjudicator(MutableMapping): """ Intermediary between a cache dictionary and a CacheModule """ def __init__(self, plugin_name='memory', **kwargs): self._cache = {} self._retrieved = {} self._plugin = cache_loader.get(plugin_name, **kwargs) if not self._plugin: raise AnsibleError('Unable to load the cache plugin (%s).' 
% plugin_name) self._plugin_name = plugin_name def update_cache_if_changed(self): if self._retrieved != self._cache: self.set_cache() def set_cache(self): for top_level_cache_key in self._cache.keys(): self._plugin.set(top_level_cache_key, self._cache[top_level_cache_key]) self._retrieved = copy.deepcopy(self._cache) def load_whole_cache(self): for key in self._plugin.keys(): self._cache[key] = self._plugin.get(key) def __repr__(self): return to_text(self._cache) def __iter__(self): return iter(self.keys()) def __len__(self): return len(self.keys()) def _do_load_key(self, key): load = False if key not in self._cache and key not in self._retrieved and self._plugin_name != 'memory': if isinstance(self._plugin, BaseFileCacheModule): load = True elif not isinstance(self._plugin, BaseFileCacheModule) and self._plugin.contains(key): # Database-backed caches don't raise KeyError for expired keys, so only load if the key is valid by checking contains() load = True return load def __getitem__(self, key): if self._do_load_key(key): try: self._cache[key] = self._plugin.get(key) except KeyError: pass else: self._retrieved[key] = self._cache[key] return self._cache[key] def get(self, key, default=None): if self._do_load_key(key): try: self._cache[key] = self._plugin.get(key) except KeyError as e: pass else: self._retrieved[key] = self._cache[key] return self._cache.get(key, default) def items(self): return self._cache.items() def values(self): return self._cache.values() def keys(self): return self._cache.keys() def pop(self, key, *args): if args: return self._cache.pop(key, args[0]) return self._cache.pop(key) def __delitem__(self, key): del self._cache[key] def __setitem__(self, key, value): self._cache[key] = value def flush(self): for key in self._cache.keys(): self._plugin.delete(key) self._cache = {} def update(self, value): self._cache.update(value)
closed
ansible/ansible
https://github.com/ansible/ansible
68,770
CachePluginAdjudicator should call plugin.flush() when flushing cache
##### SUMMARY The current implementation of CachePluginAdjudicator is unable to flush the cache without loading everything in memory using `load_whole_cache`. Iterating over the plugins cache keys would avoid that step. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible.plugins.cache CachePluginAdjudicator ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /home/harm/dev/closso-ansible/ansible.cfg configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/harm/.virtualenvs/ansible/lib/python3.7/site-packages/ansible executable location = /home/harm/.virtualenvs/ansible/bin/ansible python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] ``` ##### CONFIGURATION ``` CACHE_PLUGIN(/home/harm/dev/closso-ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/home/harm/dev/closso-ansible/ansible.cfg) = ./cache/fact/ CACHE_PLUGIN_TIMEOUT(/home/harm/dev/closso-ansible/ansible.cfg) = 86400 DEFAULT_CALLBACK_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/callback', '/usr/share/ansible/plugins/callback'] DEFAULT_CALLBACK_WHITELIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['profile_tasks'] DEFAULT_FILTER_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/filter', '/usr/share/ansible/plugins/filter'] DEFAULT_GATHERING(/home/harm/dev/closso-ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/netbox_inventory.yml'] DEFAULT_LOOKUP_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/lookup', '/usr/share/ansible/plugins/lookup'] DEFAULT_ROLES_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/roles'] DEFAULT_VARS_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/vars', '/usr/share/ansible/plugins/vars'] DOC_FRAGMENT_PLUGIN_PATH(/home/harm/dev/closso-ansible/ansible.cfg) = ['/home/harm/dev/closso-ansible/plugins/doc_fragments', '/usr/share/ansible/plugins/doc_fragments'] HOST_KEY_CHECKING(/home/harm/dev/closso-ansible/ansible.cfg) = False INVENTORY_CACHE_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = True INVENTORY_ENABLED(/home/harm/dev/closso-ansible/ansible.cfg) = ['host_list', 'script', 'yaml', 'ini', 'netbox'] TRANSFORM_INVALID_GROUP_CHARS(/home/harm/dev/closso-ansible/ansible.cfg) = never USE_PERSISTENT_CONNECTIONS(/home/harm/dev/closso-ansible/ansible.cfg) = True ``` ##### STEPS TO REPRODUCE ``` >>> cache = CachePluginAdjudicator('jsonfile', _uri='./cache/netbox') >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.flush() >>> cache._plugin.keys() ['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c'] >>> cache.keys() dict_keys([]) >>> cache.load_whole_cache() >>> cache.keys() dict_keys(['netbox_b42bfs_d14a8', 'netbox_b42bfs_81f3c']) >>> cache.flush() >>> cache.keys() dict_keys([]) >>> cache._plugin.keys() [] ``` ##### EXPECTED RESULTS Cache should be flushed without loading everything in memory. ##### ACTUAL RESULTS `cache.flush()` does nothing.
https://github.com/ansible/ansible/issues/68770
https://github.com/ansible/ansible/pull/70987
8313cc8fb1a6f879328b5178f790086a2ba580b2
7f62a9d7b5164e474da3c933798ac5a41d9394f6
2020-04-08T11:35:50Z
python
2020-08-03T22:16:15Z
test/units/plugins/cache/test_cache.py
# (c) 2012-2015, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from units.compat import unittest, mock from ansible.errors import AnsibleError from ansible.plugins.cache import FactCache, CachePluginAdjudicator from ansible.plugins.cache.base import BaseCacheModule from ansible.plugins.cache.memory import CacheModule as MemoryCache from ansible.plugins.loader import cache_loader import pytest class TestCachePluginAdjudicator: # memory plugin cache cache = CachePluginAdjudicator() cache['cache_key'] = {'key1': 'value1', 'key2': 'value2'} cache['cache_key_2'] = {'key': 'value'} def test___setitem__(self): self.cache['new_cache_key'] = {'new_key1': ['new_value1', 'new_value2']} assert self.cache['new_cache_key'] == {'new_key1': ['new_value1', 'new_value2']} def test_inner___setitem__(self): self.cache['new_cache_key'] = {'new_key1': ['new_value1', 'new_value2']} self.cache['new_cache_key']['new_key1'][0] = 'updated_value1' assert self.cache['new_cache_key'] == {'new_key1': ['updated_value1', 'new_value2']} def test___contains__(self): assert 'cache_key' in self.cache assert 'not_cache_key' not in self.cache def test_get(self): assert self.cache.get('cache_key') == {'key1': 'value1', 'key2': 'value2'} def test_get_with_default(self): assert self.cache.get('foo', 'bar') == 'bar' def test_get_without_default(self): assert self.cache.get('foo') is None def test___getitem__(self): with pytest.raises(KeyError) as err: self.cache['foo'] def test_pop_with_default(self): assert self.cache.pop('foo', 'bar') == 'bar' def test_pop_without_default(self): with pytest.raises(KeyError) as err: assert self.cache.pop('foo') def test_pop(self): v = self.cache.pop('cache_key_2') assert v == {'key': 'value'} assert 'cache_key_2' not in self.cache def test_update(self): self.cache.update({'cache_key': {'key2': 'updatedvalue'}}) assert self.cache['cache_key']['key2'] == 'updatedvalue' class TestFactCache(unittest.TestCase): def setUp(self): with mock.patch('ansible.constants.CACHE_PLUGIN', 'memory'): self.cache = FactCache() def test_copy(self): self.cache['avocado'] = 'fruit' self.cache['daisy'] = 'flower' a_copy = self.cache.copy() self.assertEqual(type(a_copy), dict) self.assertEqual(a_copy, dict(avocado='fruit', daisy='flower')) def test_plugin_load_failure(self): # See https://github.com/ansible/ansible/issues/18751 # Note no fact_connection config set, so this will fail with mock.patch('ansible.constants.CACHE_PLUGIN', 'json'): self.assertRaisesRegexp(AnsibleError, "Unable to load the facts cache plugin.*json.*", FactCache) def test_update(self): self.cache.update({'cache_key': {'key2': 'updatedvalue'}}) assert self.cache['cache_key']['key2'] == 'updatedvalue' def test_update_legacy(self): self.cache.update('cache_key', {'key2': 'updatedvalue'}) assert 
self.cache['cache_key']['key2'] == 'updatedvalue' def test_update_legacy_key_exists(self): self.cache['cache_key'] = {'key': 'value', 'key2': 'value2'} self.cache.update('cache_key', {'key': 'updatedvalue'}) assert self.cache['cache_key']['key'] == 'updatedvalue' assert self.cache['cache_key']['key2'] == 'value2' class TestAbstractClass(unittest.TestCase): def setUp(self): pass def tearDown(self): pass def test_subclass_error(self): class CacheModule1(BaseCacheModule): pass with self.assertRaises(TypeError): CacheModule1() # pylint: disable=abstract-class-instantiated class CacheModule2(BaseCacheModule): def get(self, key): super(CacheModule2, self).get(key) with self.assertRaises(TypeError): CacheModule2() # pylint: disable=abstract-class-instantiated def test_subclass_success(self): class CacheModule3(BaseCacheModule): def get(self, key): super(CacheModule3, self).get(key) def set(self, key, value): super(CacheModule3, self).set(key, value) def keys(self): super(CacheModule3, self).keys() def contains(self, key): super(CacheModule3, self).contains(key) def delete(self, key): super(CacheModule3, self).delete(key) def flush(self): super(CacheModule3, self).flush() def copy(self): super(CacheModule3, self).copy() self.assertIsInstance(CacheModule3(), CacheModule3) def test_memory_cachemodule(self): self.assertIsInstance(MemoryCache(), MemoryCache) def test_memory_cachemodule_with_loader(self): self.assertIsInstance(cache_loader.get('memory'), MemoryCache)
closed
ansible/ansible
https://github.com/ansible/ansible
70,429
Download of SCM collections gives traceback
##### SUMMARY Compatibility of `ansible-galaxy collection download` with collections from source control is unclear, but attempting it gives a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/cli/galaxy.py ##### ANSIBLE VERSION ``` ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Run the command ``` ansible-galaxy collection install -r examples/pytz/requirements.yml -p target/ ``` Where the requirements file `examples/pytz/requirements.yml` contains the contents ```yaml collections: - name: https://github.com/AlanCoding/awx.git#awx_collection,ee_req type: git ``` ##### EXPECTED RESULTS Well, I expect it to be documented clearly one way or the other. I would _like_ for this to work. Why? Because what it installs isn't the same as what source control gave. There is a `galaxy.yml` file checked into source control in my branch. However, if I run the `ansible-galaxy collection install`, it has a `MANIFEST.json` file and no `galaxy.yml`. This suggests that there was some intermediary step, where it built and then installed the collection. Given that speculation, I would expect that the download command will give me the `tar.gz` file which can reproduce the files that the install command gave me. ##### ACTUAL RESULTS ``` $ ansible-galaxy collection download -r examples/pytz/requirements.yml -p target -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-galaxy 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible-galaxy python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] No config file found; using defaults Reading requirement file at '/Users/alancoding/Documents/repos/ansible-builder/examples/pytz/requirements.yml' Process install dependency map Processing requirement collection 'https://github.com/AlanCoding/awx.git#awx_collection,ee_req' archiving ['/usr/local/bin/git', 'archive', '--prefix=awx/', '--output=/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpxdfhelr6.tar', 'ee_req'] Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/tools' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/setup.cfg' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/galaxy.yml' for collection build Starting collection download process to '/Users/alancoding/Documents/repos/ansible-builder/target' Downloading collection 'awx.awx' to '/Users/alancoding/Documents/repos/ansible-builder/target/awx-awx-0.0.1-devel.tar.gz' ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute '_add_auth_token' the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-galaxy", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 498, in run context.CLIARGS['func']() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 789, in execute_download context.CLIARGS['allow_pre_release']) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 607, in download_collections b_temp_download_path = requirement.download(b_temp_path) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 226, in download self.api._add_auth_token(headers, download_url, required=False) AttributeError: 'NoneType' object has no attribute '_add_auth_token' ```
https://github.com/ansible/ansible/issues/70429
https://github.com/ansible/ansible/pull/71005
4bd7580dd7f41ca484fde21dda94d2dd856108b0
f6b3b4b430619fbd4af0f2bf50b4715ca959a7ff
2020-07-02T02:08:19Z
python
2020-08-04T15:10:00Z
changelogs/fragments/galaxy-download-scm.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
70,429
Download of SCM collections gives traceback
##### SUMMARY Compatibility of `ansible-galaxy collection download` with collections from source control is unclear, but attempting it gives a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/cli/galaxy.py ##### ANSIBLE VERSION ``` ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Run the command ``` ansible-galaxy collection install -r examples/pytz/requirements.yml -p target/ ``` Where the requirements file `examples/pytz/requirements.yml` contains the contents ```yaml collections: - name: https://github.com/AlanCoding/awx.git#awx_collection,ee_req type: git ``` ##### EXPECTED RESULTS Well, I expect it to be documented clearly one way or the other. I would _like_ for this to work. Why? Because what it installs isn't the same as what source control gave. There is a `galaxy.yml` file checked into source control in my branch. However, if I run the `ansible-galaxy collection install`, it has a `MANIFEST.json` file and no `galaxy.yml`. This suggests that there was some intermediary step, where it built and then installed the collection. Given that speculation, I would expect that the download command will give me the `tar.gz` file which can reproduce the files that the install command gave me. ##### ACTUAL RESULTS ``` $ ansible-galaxy collection download -r examples/pytz/requirements.yml -p target -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-galaxy 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible-galaxy python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] No config file found; using defaults Reading requirement file at '/Users/alancoding/Documents/repos/ansible-builder/examples/pytz/requirements.yml' Process install dependency map Processing requirement collection 'https://github.com/AlanCoding/awx.git#awx_collection,ee_req' archiving ['/usr/local/bin/git', 'archive', '--prefix=awx/', '--output=/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpxdfhelr6.tar', 'ee_req'] Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/tools' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/setup.cfg' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/galaxy.yml' for collection build Starting collection download process to '/Users/alancoding/Documents/repos/ansible-builder/target' Downloading collection 'awx.awx' to '/Users/alancoding/Documents/repos/ansible-builder/target/awx-awx-0.0.1-devel.tar.gz' ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute '_add_auth_token' the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-galaxy", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 498, in run context.CLIARGS['func']() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 789, in execute_download context.CLIARGS['allow_pre_release']) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 607, in download_collections b_temp_download_path = requirement.download(b_temp_path) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 226, in download self.api._add_auth_token(headers, download_url, required=False) AttributeError: 'NoneType' object has no attribute '_add_auth_token' ```
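To make the failure mode concrete: the traceback ends in `CollectionRequirement.download()` dereferencing `self.api`, and in the `collection.py` source reproduced below, both `from_path` and `from_tar` construct requirements with `api=None` for local-path and SCM sources. A minimal standalone sketch of that failure, using a toy stand-in class rather than Ansible's real `CollectionRequirement`:

```python
# Toy reproduction of the reported AttributeError; not Ansible source.
class Requirement:
    def __init__(self, api=None, b_path=None):
        self.api = api        # a GalaxyAPI object for server-sourced collections; None for SCM/path sources
        self.b_path = b_path  # local checkout or tarball path for non-server sources

    def download(self, b_temp_path):
        headers = {}
        # With api=None this line fails exactly as in the traceback above:
        # AttributeError: 'NoneType' object has no attribute '_add_auth_token'
        self.api._add_auth_token(headers, 'https://example.invalid/artifact.tar.gz', required=False)


Requirement(api=None, b_path=b'/tmp/awx_checkout').download(b'/tmp')
```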
https://github.com/ansible/ansible/issues/70429
https://github.com/ansible/ansible/pull/71005
4bd7580dd7f41ca484fde21dda94d2dd856108b0
f6b3b4b430619fbd4af0f2bf50b4715ca959a7ff
2020-07-02T02:08:19Z
python
2020-08-04T15:10:00Z
lib/ansible/galaxy/collection.py
# Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import errno import fnmatch import json import operator import os import shutil import stat import sys import tarfile import tempfile import threading import time import yaml from collections import namedtuple from contextlib import contextmanager from distutils.version import LooseVersion from hashlib import sha256 from io import BytesIO from yaml.error import YAMLError try: import queue except ImportError: import Queue as queue # Python 2 import ansible.constants as C from ansible.errors import AnsibleError from ansible.galaxy import get_collections_galaxy_meta_info from ansible.galaxy.api import CollectionVersionMetadata, GalaxyError from ansible.galaxy.user_agent import user_agent from ansible.module_utils import six from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.utils.collection_loader import AnsibleCollectionRef from ansible.utils.display import Display from ansible.utils.galaxy import scm_archive_collection from ansible.utils.hashing import secure_hash, secure_hash_s from ansible.utils.version import SemanticVersion from ansible.module_utils.urls import open_url urlparse = six.moves.urllib.parse.urlparse urldefrag = six.moves.urllib.parse.urldefrag urllib_error = six.moves.urllib.error display = Display() MANIFEST_FORMAT = 1 ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed']) class CollectionRequirement: _FILE_MAPPING = [(b'MANIFEST.json', 'manifest_file'), (b'FILES.json', 'files_file')] def __init__(self, namespace, name, b_path, api, versions, requirement, force, parent=None, metadata=None, files=None, skip=False, allow_pre_releases=False): """Represents a collection requirement, the versions that are available to be installed as well as any dependencies the collection has. :param namespace: The collection namespace. :param name: The collection name. :param b_path: Byte str of the path to the collection tarball if it has already been downloaded. :param api: The GalaxyAPI to use if the collection is from Galaxy. :param versions: A list of versions of the collection that are available. :param requirement: The version requirement string used to verify the list of versions fit the requirements. :param force: Whether the force flag applied to the collection. :param parent: The name of the parent the collection is a dependency of. :param metadata: The galaxy.api.CollectionVersionMetadata that has already been retrieved from the Galaxy server. :param files: The files that exist inside the collection. This is based on the FILES.json file inside the collection artifact. :param skip: Whether to skip installing the collection. Should be set if the collection is already installed and force is not set. :param allow_pre_releases: Whether to skip pre-release versions of collections. 
""" self.namespace = namespace self.name = name self.b_path = b_path self.api = api self._versions = set(versions) self.force = force self.skip = skip self.required_by = [] self.allow_pre_releases = allow_pre_releases self._metadata = metadata self._files = files self.add_requirement(parent, requirement) def __str__(self): return to_native("%s.%s" % (self.namespace, self.name)) def __unicode__(self): return u"%s.%s" % (self.namespace, self.name) @property def metadata(self): self._get_metadata() return self._metadata @property def versions(self): if self.allow_pre_releases: return self._versions return set(v for v in self._versions if v == '*' or not SemanticVersion(v).is_prerelease) @versions.setter def versions(self, value): self._versions = set(value) @property def pre_releases(self): return set(v for v in self._versions if SemanticVersion(v).is_prerelease) @property def latest_version(self): try: return max([v for v in self.versions if v != '*'], key=SemanticVersion) except ValueError: # ValueError: max() arg is an empty sequence return '*' @property def dependencies(self): if not self._metadata: if len(self.versions) > 1: return {} self._get_metadata() dependencies = self._metadata.dependencies if dependencies is None: return {} return dependencies @staticmethod def artifact_info(b_path): """Load the manifest data from the MANIFEST.json and FILES.json. If the files exist, return a dict containing the keys 'files_file' and 'manifest_file'. :param b_path: The directory of a collection. """ info = {} for b_file_name, property_name in CollectionRequirement._FILE_MAPPING: b_file_path = os.path.join(b_path, b_file_name) if not os.path.exists(b_file_path): continue with open(b_file_path, 'rb') as file_obj: try: info[property_name] = json.loads(to_text(file_obj.read(), errors='surrogate_or_strict')) except ValueError: raise AnsibleError("Collection file at '%s' does not contain a valid json string." % to_native(b_file_path)) return info @staticmethod def galaxy_metadata(b_path): """Generate the manifest data from the galaxy.yml file. If the galaxy.yml exists, return a dictionary containing the keys 'files_file' and 'manifest_file'. :param b_path: The directory of a collection. """ b_galaxy_path = get_galaxy_metadata_path(b_path) info = {} if os.path.exists(b_galaxy_path): collection_meta = _get_galaxy_yml(b_galaxy_path) info['files_file'] = _build_files_manifest(b_path, collection_meta['namespace'], collection_meta['name'], collection_meta['build_ignore']) info['manifest_file'] = _build_manifest(**collection_meta) return info @staticmethod def collection_info(b_path, fallback_metadata=False): info = CollectionRequirement.artifact_info(b_path) if info or not fallback_metadata: return info return CollectionRequirement.galaxy_metadata(b_path) def add_requirement(self, parent, requirement): self.required_by.append((parent, requirement)) new_versions = set(v for v in self.versions if self._meets_requirements(v, requirement, parent)) if len(new_versions) == 0: if self.skip: force_flag = '--force-with-deps' if parent else '--force' version = self.latest_version if self.latest_version != '*' else 'unknown' msg = "Cannot meet requirement %s:%s as it is already installed at version '%s'. 
Use %s to overwrite" \ % (to_text(self), requirement, version, force_flag) raise AnsibleError(msg) elif parent is None: msg = "Cannot meet requirement %s for dependency %s" % (requirement, to_text(self)) else: msg = "Cannot meet dependency requirement '%s:%s' for collection %s" \ % (to_text(self), requirement, parent) collection_source = to_text(self.b_path, nonstring='passthru') or self.api.api_server req_by = "\n".join( "\t%s - '%s:%s'" % (to_text(p) if p else 'base', to_text(self), r) for p, r in self.required_by ) versions = ", ".join(sorted(self.versions, key=SemanticVersion)) if not self.versions and self.pre_releases: pre_release_msg = ( '\nThis collection only contains pre-releases. Utilize `--pre` to install pre-releases, or ' 'explicitly provide the pre-release version.' ) else: pre_release_msg = '' raise AnsibleError( "%s from source '%s'. Available versions before last requirement added: %s\nRequirements from:\n%s%s" % (msg, collection_source, versions, req_by, pre_release_msg) ) self.versions = new_versions def download(self, b_path): download_url = self._metadata.download_url artifact_hash = self._metadata.artifact_sha256 headers = {} self.api._add_auth_token(headers, download_url, required=False) b_collection_path = _download_file(download_url, b_path, artifact_hash, self.api.validate_certs, headers=headers) return to_text(b_collection_path, errors='surrogate_or_strict') def install(self, path, b_temp_path): if self.skip: display.display("Skipping '%s' as it is already installed" % to_text(self)) return # Install if it is not collection_path = os.path.join(path, self.namespace, self.name) b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict') display.display("Installing '%s:%s' to '%s'" % (to_text(self), self.latest_version, collection_path)) if self.b_path is None: self.b_path = self.download(b_temp_path) if os.path.exists(b_collection_path): shutil.rmtree(b_collection_path) if os.path.isfile(self.b_path): self.install_artifact(b_collection_path, b_temp_path) else: self.install_scm(b_collection_path) display.display("%s (%s) was installed successfully" % (to_text(self), self.latest_version)) def install_artifact(self, b_collection_path, b_temp_path): try: with tarfile.open(self.b_path, mode='r') as collection_tar: files_member_obj = collection_tar.getmember('FILES.json') with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj): files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict')) _extract_tar_file(collection_tar, 'MANIFEST.json', b_collection_path, b_temp_path) _extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path) for file_info in files['files']: file_name = file_info['name'] if file_name == '.': continue if file_info['ftype'] == 'file': _extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path, expected_hash=file_info['chksum_sha256']) else: _extract_tar_dir(collection_tar, file_name, b_collection_path) except Exception: # Ensure we don't leave the dir behind in case of a failure. shutil.rmtree(b_collection_path) b_namespace_path = os.path.dirname(b_collection_path) if not os.listdir(b_namespace_path): os.rmdir(b_namespace_path) raise def install_scm(self, b_collection_output_path): """Install the collection from source control into given dir. Generates the Ansible collection artifact data from a galaxy.yml and installs the artifact to a directory. This should follow the same pattern as build_collection, but instead of creating an artifact, install it. 
:param b_collection_output_path: The installation directory for the collection artifact. :raises AnsibleError: If no collection metadata found. """ b_collection_path = self.b_path b_galaxy_path = get_galaxy_metadata_path(b_collection_path) if not os.path.exists(b_galaxy_path): raise AnsibleError("The collection galaxy.yml path '%s' does not exist." % to_native(b_galaxy_path)) info = CollectionRequirement.galaxy_metadata(b_collection_path) collection_manifest = info['manifest_file'] collection_meta = collection_manifest['collection_info'] file_manifest = info['files_file'] _build_collection_dir(b_collection_path, b_collection_output_path, collection_manifest, file_manifest) collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'], collection_manifest['collection_info']['name']) display.display('Created collection for %s at %s' % (collection_name, to_text(b_collection_output_path))) def set_latest_version(self): self.versions = set([self.latest_version]) self._get_metadata() def verify(self, remote_collection, path, b_temp_tar_path): if not self.skip: display.display("'%s' has not been installed, nothing to verify" % (to_text(self))) return collection_path = os.path.join(path, self.namespace, self.name) b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict') display.vvv("Verifying '%s:%s'." % (to_text(self), self.latest_version)) display.vvv("Installed collection found at '%s'" % collection_path) display.vvv("Remote collection found at '%s'" % remote_collection.metadata.download_url) # Compare installed version versus requirement version if self.latest_version != remote_collection.latest_version: err = "%s has the version '%s' but is being compared to '%s'" % (to_text(self), self.latest_version, remote_collection.latest_version) display.display(err) return modified_content = [] # Verify the manifest hash matches before verifying the file manifest expected_hash = _get_tar_file_hash(b_temp_tar_path, 'MANIFEST.json') self._verify_file_hash(b_collection_path, 'MANIFEST.json', expected_hash, modified_content) manifest = _get_json_from_tar_file(b_temp_tar_path, 'MANIFEST.json') # Use the manifest to verify the file manifest checksum file_manifest_data = manifest['file_manifest_file'] file_manifest_filename = file_manifest_data['name'] expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']] # Verify the file manifest before using it to verify individual files self._verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content) file_manifest = _get_json_from_tar_file(b_temp_tar_path, file_manifest_filename) # Use the file manifest to verify individual file checksums for manifest_data in file_manifest['files']: if manifest_data['ftype'] == 'file': expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']] self._verify_file_hash(b_collection_path, manifest_data['name'], expected_hash, modified_content) if modified_content: display.display("Collection %s contains modified content in the following files:" % to_text(self)) display.display(to_text(self)) display.vvv(to_text(self.b_path)) for content_change in modified_content: display.display(' %s' % content_change.filename) display.vvv(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed)) else: display.vvv("Successfully verified that checksums for '%s:%s' match the remote collection" % (to_text(self), self.latest_version)) def _verify_file_hash(self, b_path, filename, expected_hash, error_queue): b_file_path = 
to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict') if not os.path.isfile(b_file_path): actual_hash = None else: with open(b_file_path, mode='rb') as file_object: actual_hash = _consume_file(file_object) if expected_hash != actual_hash: error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash)) def _get_metadata(self): if self._metadata: return self._metadata = self.api.get_collection_version_metadata(self.namespace, self.name, self.latest_version) def _meets_requirements(self, version, requirements, parent): """ Supports version identifiers can be '==', '!=', '>', '>=', '<', '<=', '*'. Each requirement is delimited by ',' """ op_map = { '!=': operator.ne, '==': operator.eq, '=': operator.eq, '>=': operator.ge, '>': operator.gt, '<=': operator.le, '<': operator.lt, } for req in list(requirements.split(',')): op_pos = 2 if len(req) > 1 and req[1] == '=' else 1 op = op_map.get(req[:op_pos]) requirement = req[op_pos:] if not op: requirement = req op = operator.eq # In the case we are checking a new requirement on a base requirement (parent != None) we can't accept # version as '*' (unknown version) unless the requirement is also '*'. if parent and version == '*' and requirement != '*': display.warning("Failed to validate the collection requirement '%s:%s' for %s when the existing " "install does not have a version set, the collection may not work." % (to_text(self), req, parent)) continue elif requirement == '*' or version == '*': continue if not op(SemanticVersion(version), SemanticVersion.from_loose_version(LooseVersion(requirement))): break else: return True # The loop was broken early, it does not meet all the requirements return False @staticmethod def from_tar(b_path, force, parent=None): if not tarfile.is_tarfile(b_path): raise AnsibleError("Collection artifact at '%s' is not a valid tar file." % to_native(b_path)) info = {} with tarfile.open(b_path, mode='r') as collection_tar: for b_member_name, property_name in CollectionRequirement._FILE_MAPPING: n_member_name = to_native(b_member_name) try: member = collection_tar.getmember(n_member_name) except KeyError: raise AnsibleError("Collection at '%s' does not contain the required file %s." % (to_native(b_path), n_member_name)) with _tarfile_extract(collection_tar, member) as (dummy, member_obj): try: info[property_name] = json.loads(to_text(member_obj.read(), errors='surrogate_or_strict')) except ValueError: raise AnsibleError("Collection tar file member %s does not contain a valid json string." 
% n_member_name) meta = info['manifest_file']['collection_info'] files = info['files_file']['files'] namespace = meta['namespace'] name = meta['name'] version = meta['version'] meta = CollectionVersionMetadata(namespace, name, version, None, None, meta['dependencies']) if SemanticVersion(version).is_prerelease: allow_pre_release = True else: allow_pre_release = False return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent, metadata=meta, files=files, allow_pre_releases=allow_pre_release) @staticmethod def from_path(b_path, force, parent=None, fallback_metadata=False, skip=True): info = CollectionRequirement.collection_info(b_path, fallback_metadata) allow_pre_release = False if 'manifest_file' in info: manifest = info['manifest_file']['collection_info'] namespace = manifest['namespace'] name = manifest['name'] version = to_text(manifest['version'], errors='surrogate_or_strict') try: _v = SemanticVersion() _v.parse(version) if _v.is_prerelease: allow_pre_release = True except ValueError: display.warning("Collection at '%s' does not have a valid version set, falling back to '*'. Found " "version: '%s'" % (to_text(b_path), version)) version = '*' dependencies = manifest['dependencies'] else: if fallback_metadata: warning = "Collection at '%s' does not have a galaxy.yml or a MANIFEST.json file, cannot detect version." else: warning = "Collection at '%s' does not have a MANIFEST.json file, cannot detect version." display.warning(warning % to_text(b_path)) parent_dir, name = os.path.split(to_text(b_path, errors='surrogate_or_strict')) namespace = os.path.split(parent_dir)[1] version = '*' dependencies = {} meta = CollectionVersionMetadata(namespace, name, version, None, None, dependencies) files = info.get('files_file', {}).get('files', {}) return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent, metadata=meta, files=files, skip=skip, allow_pre_releases=allow_pre_release) @staticmethod def from_name(collection, apis, requirement, force, parent=None, allow_pre_release=False): namespace, name = collection.split('.', 1) galaxy_meta = None for api in apis: try: if not (requirement == '*' or requirement.startswith('<') or requirement.startswith('>') or requirement.startswith('!=')): # Exact requirement allow_pre_release = True if requirement.startswith('='): requirement = requirement.lstrip('=') resp = api.get_collection_version_metadata(namespace, name, requirement) galaxy_meta = resp versions = [resp.version] else: versions = api.get_collection_versions(namespace, name) except GalaxyError as err: if err.http_code != 404: raise versions = [] # Automation Hub doesn't return a 404 but an empty version list so we check that to align both AH and # Galaxy when the collection is not available on that server. if not versions: display.vvv("Collection '%s' is not available from server %s %s" % (collection, api.name, api.api_server)) continue display.vvv("Collection '%s' obtained from server %s %s" % (collection, api.name, api.api_server)) break else: raise AnsibleError("Failed to find collection %s:%s" % (collection, requirement)) req = CollectionRequirement(namespace, name, None, api, versions, requirement, force, parent=parent, metadata=galaxy_meta, allow_pre_releases=allow_pre_release) return req def build_collection(collection_path, output_path, force): """Creates the Ansible collection artifact in a .tar.gz file. :param collection_path: The path to the collection to build. 
This should be the directory that contains the galaxy.yml file. :param output_path: The path to create the collection build artifact. This should be a directory. :param force: Whether to overwrite an existing collection build artifact or fail. :return: The path to the collection build artifact. """ b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict') b_galaxy_path = get_galaxy_metadata_path(b_collection_path) if not os.path.exists(b_galaxy_path): raise AnsibleError("The collection galaxy.yml path '%s' does not exist." % to_native(b_galaxy_path)) info = CollectionRequirement.galaxy_metadata(b_collection_path) collection_manifest = info['manifest_file'] collection_meta = collection_manifest['collection_info'] file_manifest = info['files_file'] collection_output = os.path.join(output_path, "%s-%s-%s.tar.gz" % (collection_meta['namespace'], collection_meta['name'], collection_meta['version'])) b_collection_output = to_bytes(collection_output, errors='surrogate_or_strict') if os.path.exists(b_collection_output): if os.path.isdir(b_collection_output): raise AnsibleError("The output collection artifact '%s' already exists, " "but is a directory - aborting" % to_native(collection_output)) elif not force: raise AnsibleError("The file '%s' already exists. You can use --force to re-create " "the collection artifact." % to_native(collection_output)) _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest) return collection_output def download_collections(collections, output_path, apis, validate_certs, no_deps, allow_pre_release): """Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements file of the downloaded requirements to be used for an install. :param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server). :param output_path: The path to download the collections to. :param apis: A list of GalaxyAPIs to query when search for a collection. :param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host. :param no_deps: Ignore any collection dependencies and only download the base requirements. :param allow_pre_release: Do not ignore pre-release versions when selecting the latest. 
""" with _tempdir() as b_temp_path: display.display("Process install dependency map") with _display_progress(): dep_map = _build_dependency_map(collections, [], b_temp_path, apis, validate_certs, True, True, no_deps, allow_pre_release=allow_pre_release) requirements = [] display.display("Starting collection download process to '%s'" % output_path) with _display_progress(): for name, requirement in dep_map.items(): collection_filename = "%s-%s-%s.tar.gz" % (requirement.namespace, requirement.name, requirement.latest_version) dest_path = os.path.join(output_path, collection_filename) requirements.append({'name': collection_filename, 'version': requirement.latest_version}) display.display("Downloading collection '%s' to '%s'" % (name, dest_path)) if requirement.api is None and requirement.b_path and os.path.isfile(requirement.b_path): shutil.copy(requirement.b_path, to_bytes(dest_path, errors='surrogate_or_strict')) elif requirement.api is None and requirement.b_path: temp_path = to_text(b_temp_path, errors='surrogate_or_string') scm_build_path = os.path.join(temp_path, 'tmp_build-%s' % collection_filename) os.makedirs(to_bytes(scm_build_path, errors='surrogate_or_strict'), mode=0o0755) temp_download_path = build_collection(os.path.join(temp_path, name), scm_build_path, True) shutil.move(to_bytes(temp_download_path, errors='surrogate_or_strict'), to_bytes(dest_path, errors='surrogate_or_strict')) else: b_temp_download_path = requirement.download(b_temp_path) shutil.move(b_temp_download_path, to_bytes(dest_path, errors='surrogate_or_strict')) display.display("%s (%s) was downloaded successfully" % (name, requirement.latest_version)) requirements_path = os.path.join(output_path, 'requirements.yml') display.display("Writing requirements.yml file of downloaded collections to '%s'" % requirements_path) with open(to_bytes(requirements_path, errors='surrogate_or_strict'), mode='wb') as req_fd: req_fd.write(to_bytes(yaml.safe_dump({'collections': requirements}), errors='surrogate_or_strict')) def publish_collection(collection_path, api, wait, timeout): """Publish an Ansible collection tarball into an Ansible Galaxy server. :param collection_path: The path to the collection tarball to publish. :param api: A GalaxyAPI to publish the collection to. :param wait: Whether to wait until the import process is complete. :param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite. """ import_uri = api.publish_collection(collection_path) if wait: # Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is # always the task_id, though. # v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"} # v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"} task_id = None for path_segment in reversed(import_uri.split('/')): if path_segment: task_id = path_segment break if not task_id: raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri) display.display("Collection has been published to the Galaxy server %s %s" % (api.name, api.api_server)) with _display_progress(): api.wait_import_task(task_id, timeout) display.display("Collection has been successfully published and imported to the Galaxy server %s %s" % (api.name, api.api_server)) else: display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has " "completed due to --no-wait being set. 
Import task results can be found at %s" % (api.name, api.api_server, import_uri)) def install_collections(collections, output_path, apis, validate_certs, ignore_errors, no_deps, force, force_deps, allow_pre_release=False): """Install Ansible collections to the path specified. :param collections: The collections to install, should be a list of tuples with (name, requirement, Galaxy server). :param output_path: The path to install the collections to. :param apis: A list of GalaxyAPIs to query when searching for a collection. :param validate_certs: Whether to validate the certificates if downloading a tarball. :param ignore_errors: Whether to ignore any errors when installing the collection. :param no_deps: Ignore any collection dependencies and only install the base requirements. :param force: Re-install a collection if it has already been installed. :param force_deps: Re-install a collection as well as its dependencies if they have already been installed. """ existing_collections = find_existing_collections(output_path, fallback_metadata=True) with _tempdir() as b_temp_path: display.display("Process install dependency map") with _display_progress(): dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis, validate_certs, force, force_deps, no_deps, allow_pre_release=allow_pre_release) display.display("Starting collection install process") with _display_progress(): for collection in dependency_map.values(): try: collection.install(output_path, b_temp_path) except AnsibleError as err: if ignore_errors: display.warning("Failed to install collection %s but skipping due to --ignore-errors being set. " "Error: %s" % (to_text(collection), to_text(err))) else: raise def validate_collection_name(name): """Validates the collection name as an input from the user or a requirements file fit the requirements. :param name: The input name with optional range specifier split by ':'. :return: The input value, required for argparse validation. """ collection, dummy, dummy = name.partition(':') if AnsibleCollectionRef.is_valid_collection_name(collection): return name raise AnsibleError("Invalid collection name '%s', " "name must be in the format <namespace>.<collection>. \n" "Please make sure namespace and collection name contains " "characters from [a-zA-Z0-9_] only." % name) def validate_collection_path(collection_path): """Ensure a given path ends with 'ansible_collections' :param collection_path: The path that should end in 'ansible_collections' :return: collection_path ending in 'ansible_collections' if it does not already. """ if os.path.split(collection_path)[1] != 'ansible_collections': return os.path.join(collection_path, 'ansible_collections') return collection_path def verify_collections(collections, search_paths, apis, validate_certs, ignore_errors, allow_pre_release=False): with _display_progress(): with _tempdir() as b_temp_path: for collection in collections: try: local_collection = None b_collection = to_bytes(collection[0], errors='surrogate_or_strict') if os.path.isfile(b_collection) or urlparse(collection[0]).scheme.lower() in ['http', 'https'] or len(collection[0].split('.')) != 2: raise AnsibleError(message="'%s' is not a valid collection name. The format namespace.name is expected." 
% collection[0]) collection_name = collection[0] namespace, name = collection_name.split('.') collection_version = collection[1] # Verify local collection exists before downloading it from a galaxy server for search_path in search_paths: b_search_path = to_bytes(os.path.join(search_path, namespace, name), errors='surrogate_or_strict') if os.path.isdir(b_search_path): if not os.path.isfile(os.path.join(to_text(b_search_path, errors='surrogate_or_strict'), 'MANIFEST.json')): raise AnsibleError( message="Collection %s does not appear to have a MANIFEST.json. " % collection_name + "A MANIFEST.json is expected if the collection has been built and installed via ansible-galaxy." ) local_collection = CollectionRequirement.from_path(b_search_path, False) break if local_collection is None: raise AnsibleError(message='Collection %s is not installed in any of the collection paths.' % collection_name) # Download collection on a galaxy server for comparison try: remote_collection = CollectionRequirement.from_name(collection_name, apis, collection_version, False, parent=None, allow_pre_release=allow_pre_release) except AnsibleError as e: if e.message == 'Failed to find collection %s:%s' % (collection[0], collection[1]): raise AnsibleError('Failed to find remote collection %s:%s on any of the galaxy servers' % (collection[0], collection[1])) raise download_url = remote_collection.metadata.download_url headers = {} remote_collection.api._add_auth_token(headers, download_url, required=False) b_temp_tar_path = _download_file(download_url, b_temp_path, None, validate_certs, headers=headers) local_collection.verify(remote_collection, search_path, b_temp_tar_path) except AnsibleError as err: if ignore_errors: display.warning("Failed to verify collection %s but skipping due to --ignore-errors being set. " "Error: %s" % (collection[0], to_text(err))) else: raise @contextmanager def _tempdir(): b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict')) yield b_temp_path shutil.rmtree(b_temp_path) @contextmanager def _tarfile_extract(tar, member): tar_obj = tar.extractfile(member) yield member, tar_obj tar_obj.close() @contextmanager def _display_progress(): config_display = C.GALAXY_DISPLAY_PROGRESS display_wheel = sys.stdout.isatty() if config_display is None else config_display if not display_wheel: yield return def progress(display_queue, actual_display): actual_display.debug("Starting display_progress display thread") t = threading.current_thread() while True: for c in "|/-\\": actual_display.display(c + "\b", newline=False) time.sleep(0.1) # Display a message from the main thread while True: try: method, args, kwargs = display_queue.get(block=False, timeout=0.1) except queue.Empty: break else: func = getattr(actual_display, method) func(*args, **kwargs) if getattr(t, "finish", False): actual_display.debug("Received end signal for display_progress display thread") return class DisplayThread(object): def __init__(self, display_queue): self.display_queue = display_queue def __getattr__(self, attr): def call_display(*args, **kwargs): self.display_queue.put((attr, args, kwargs)) return call_display # Temporary override the global display class with our own which add the calls to a queue for the thread to call. 
global display old_display = display try: display_queue = queue.Queue() display = DisplayThread(display_queue) t = threading.Thread(target=progress, args=(display_queue, old_display)) t.daemon = True t.start() try: yield finally: t.finish = True t.join() except Exception: # The exception is re-raised so we can sure the thread is finished and not using the display anymore raise finally: display = old_display def _get_galaxy_yml(b_galaxy_yml_path): meta_info = get_collections_galaxy_meta_info() mandatory_keys = set() string_keys = set() list_keys = set() dict_keys = set() for info in meta_info: if info.get('required', False): mandatory_keys.add(info['key']) key_list_type = { 'str': string_keys, 'list': list_keys, 'dict': dict_keys, }[info.get('type', 'str')] key_list_type.add(info['key']) all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys)) try: with open(b_galaxy_yml_path, 'rb') as g_yaml: galaxy_yml = yaml.safe_load(g_yaml) except YAMLError as err: raise AnsibleError("Failed to parse the galaxy.yml at '%s' with the following error:\n%s" % (to_native(b_galaxy_yml_path), to_native(err))) set_keys = set(galaxy_yml.keys()) missing_keys = mandatory_keys.difference(set_keys) if missing_keys: raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s" % (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys)))) extra_keys = set_keys.difference(all_keys) if len(extra_keys) > 0: display.warning("Found unknown keys in collection galaxy.yml at '%s': %s" % (to_text(b_galaxy_yml_path), ", ".join(extra_keys))) # Add the defaults if they have not been set for optional_string in string_keys: if optional_string not in galaxy_yml: galaxy_yml[optional_string] = None for optional_list in list_keys: list_val = galaxy_yml.get(optional_list, None) if list_val is None: galaxy_yml[optional_list] = [] elif not isinstance(list_val, list): galaxy_yml[optional_list] = [list_val] for optional_dict in dict_keys: if optional_dict not in galaxy_yml: galaxy_yml[optional_dict] = {} # license is a builtin var in Python, to avoid confusion we just rename it to license_ids galaxy_yml['license_ids'] = galaxy_yml['license'] del galaxy_yml['license'] return galaxy_yml def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns): # We always ignore .pyc and .retry files as well as some well known version control directories. The ignore # patterns can be extended by the build_ignore key in galaxy.yml b_ignore_patterns = [ b'galaxy.yml', b'galaxy.yaml', b'.git', b'*.pyc', b'*.retry', b'tests/output', # Ignore ansible-test result output directory. to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir. 
] b_ignore_patterns += [to_bytes(p) for p in ignore_patterns] b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox']) entry_template = { 'name': None, 'ftype': None, 'chksum_type': None, 'chksum_sha256': None, 'format': MANIFEST_FORMAT } manifest = { 'files': [ { 'name': '.', 'ftype': 'dir', 'chksum_type': None, 'chksum_sha256': None, 'format': MANIFEST_FORMAT, }, ], 'format': MANIFEST_FORMAT, } def _walk(b_path, b_top_level_dir): for b_item in os.listdir(b_path): b_abs_path = os.path.join(b_path, b_item) b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:] b_rel_path = os.path.join(b_rel_base_dir, b_item) rel_path = to_text(b_rel_path, errors='surrogate_or_strict') if os.path.isdir(b_abs_path): if any(b_item == b_path for b_path in b_ignore_dirs) or \ any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns): display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path)) continue if os.path.islink(b_abs_path): b_link_target = os.path.realpath(b_abs_path) if not _is_child_path(b_link_target, b_top_level_dir): display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection" % to_text(b_abs_path)) continue manifest_entry = entry_template.copy() manifest_entry['name'] = rel_path manifest_entry['ftype'] = 'dir' manifest['files'].append(manifest_entry) if not os.path.islink(b_abs_path): _walk(b_abs_path, b_top_level_dir) else: if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns): display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path)) continue # Handling of file symlinks occur in _build_collection_tar, the manifest for a symlink is the same for # a normal file. manifest_entry = entry_template.copy() manifest_entry['name'] = rel_path manifest_entry['ftype'] = 'file' manifest_entry['chksum_type'] = 'sha256' manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256) manifest['files'].append(manifest_entry) _walk(b_collection_path, b_collection_path) return manifest def _build_manifest(namespace, name, version, authors, readme, tags, description, license_ids, license_file, dependencies, repository, documentation, homepage, issues, **kwargs): manifest = { 'collection_info': { 'namespace': namespace, 'name': name, 'version': version, 'authors': authors, 'readme': readme, 'tags': tags, 'description': description, 'license': license_ids, 'license_file': license_file if license_file else None, # Handle galaxy.yml having an empty string (None) 'dependencies': dependencies, 'repository': repository, 'documentation': documentation, 'homepage': homepage, 'issues': issues, }, 'file_manifest_file': { 'name': 'FILES.json', 'ftype': 'file', 'chksum_type': 'sha256', 'chksum_sha256': None, # Filled out in _build_collection_tar 'format': MANIFEST_FORMAT }, 'format': MANIFEST_FORMAT, } return manifest def _build_collection_tar(b_collection_path, b_tar_path, collection_manifest, file_manifest): """Build a tar.gz collection artifact from the manifest data.""" files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict') collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256) collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict') with _tempdir() as b_temp_path: b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path)) with tarfile.open(b_tar_filepath, 
mode='w:gz') as tar_file: # Add the MANIFEST.json and FILES.json file to the archive for name, b in [('MANIFEST.json', collection_manifest_json), ('FILES.json', files_manifest_json)]: b_io = BytesIO(b) tar_info = tarfile.TarInfo(name) tar_info.size = len(b) tar_info.mtime = time.time() tar_info.mode = 0o0644 tar_file.addfile(tarinfo=tar_info, fileobj=b_io) for file_info in file_manifest['files']: if file_info['name'] == '.': continue # arcname expects a native string, cannot be bytes filename = to_native(file_info['name'], errors='surrogate_or_strict') b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict')) def reset_stat(tarinfo): if tarinfo.type != tarfile.SYMTYPE: existing_is_exec = tarinfo.mode & stat.S_IXUSR tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644 tarinfo.uid = tarinfo.gid = 0 tarinfo.uname = tarinfo.gname = '' return tarinfo if os.path.islink(b_src_path): b_link_target = os.path.realpath(b_src_path) if _is_child_path(b_link_target, b_collection_path): b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path)) tar_info = tarfile.TarInfo(filename) tar_info.type = tarfile.SYMTYPE tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict') tar_info = reset_stat(tar_info) tar_file.addfile(tarinfo=tar_info) continue # Dealing with a normal file, just add it by name. tar_file.add(os.path.realpath(b_src_path), arcname=filename, recursive=False, filter=reset_stat) shutil.copy(b_tar_filepath, b_tar_path) collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'], collection_manifest['collection_info']['name']) display.display('Created collection for %s at %s' % (collection_name, to_text(b_tar_path))) def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest): """Build a collection directory from the manifest data. This should follow the same pattern as _build_collection_tar. 
""" os.makedirs(b_collection_output, mode=0o0755) files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict') collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256) collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict') # Write contents to the files for name, b in [('MANIFEST.json', collection_manifest_json), ('FILES.json', files_manifest_json)]: b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict')) with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io: shutil.copyfileobj(b_io, file_obj) os.chmod(b_path, 0o0644) base_directories = [] for file_info in file_manifest['files']: if file_info['name'] == '.': continue src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict')) dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict')) if any(src_file.startswith(directory) for directory in base_directories): continue existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR mode = 0o0755 if existing_is_exec else 0o0644 if os.path.isdir(src_file): mode = 0o0755 base_directories.append(src_file) shutil.copytree(src_file, dest_file) else: shutil.copyfile(src_file, dest_file) os.chmod(dest_file, mode) def find_existing_collections(path, fallback_metadata=False): collections = [] b_path = to_bytes(path, errors='surrogate_or_strict') for b_namespace in os.listdir(b_path): b_namespace_path = os.path.join(b_path, b_namespace) if os.path.isfile(b_namespace_path): continue for b_collection in os.listdir(b_namespace_path): b_collection_path = os.path.join(b_namespace_path, b_collection) if os.path.isdir(b_collection_path): req = CollectionRequirement.from_path(b_collection_path, False, fallback_metadata=fallback_metadata) display.vvv("Found installed collection %s:%s at '%s'" % (to_text(req), req.latest_version, to_text(b_collection_path))) collections.append(req) return collections def _build_dependency_map(collections, existing_collections, b_temp_path, apis, validate_certs, force, force_deps, no_deps, allow_pre_release=False): dependency_map = {} # First build the dependency map on the actual requirements for name, version, source, req_type in collections: _get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis, validate_certs, (force or force_deps), allow_pre_release=allow_pre_release, req_type=req_type) checked_parents = set([to_text(c) for c in dependency_map.values() if c.skip]) while len(dependency_map) != len(checked_parents): while not no_deps: # Only parse dependencies if no_deps was not set parents_to_check = set(dependency_map.keys()).difference(checked_parents) deps_exhausted = True for parent in parents_to_check: parent_info = dependency_map[parent] if parent_info.dependencies: deps_exhausted = False for dep_name, dep_requirement in parent_info.dependencies.items(): _get_collection_info(dependency_map, existing_collections, dep_name, dep_requirement, parent_info.api, b_temp_path, apis, validate_certs, force_deps, parent=parent, allow_pre_release=allow_pre_release) checked_parents.add(parent) # No extra dependencies were resolved, exit loop if deps_exhausted: break # Now we have resolved the deps to our best extent, now select the latest version for collections with # multiple versions found and go from there deps_not_checked = 
set(dependency_map.keys()).difference(checked_parents) for collection in deps_not_checked: dependency_map[collection].set_latest_version() if no_deps or len(dependency_map[collection].dependencies) == 0: checked_parents.add(collection) return dependency_map def _collections_from_scm(collection, requirement, b_temp_path, force, parent=None): """Returns a list of collections found in the repo. If there is a galaxy.yml in the collection then just return the specific collection. Otherwise, check each top-level directory for a galaxy.yml. :param collection: URI to a git repo :param requirement: The version of the artifact :param b_temp_path: The temporary path to the archive of a collection :param force: Whether to overwrite an existing collection or fail :param parent: The name of the parent collection :raises AnsibleError: if nothing found :return: List of CollectionRequirement objects :rtype: list """ reqs = [] name, version, path, fragment = parse_scm(collection, requirement) b_repo_root = to_bytes(name, errors='surrogate_or_strict') b_collection_path = os.path.join(b_temp_path, b_repo_root) if fragment: b_fragment = to_bytes(fragment, errors='surrogate_or_strict') b_collection_path = os.path.join(b_collection_path, b_fragment) b_galaxy_path = get_galaxy_metadata_path(b_collection_path) err = ("%s appears to be an SCM collection source, but the required galaxy.yml was not found. " "Append #path/to/collection/ to your URI (before the comma separated version, if one is specified) " "to point to a directory containing the galaxy.yml or directories of collections" % collection) display.vvvvv("Considering %s as a possible path to a collection's galaxy.yml" % b_galaxy_path) if os.path.exists(b_galaxy_path): return [CollectionRequirement.from_path(b_collection_path, force, parent, fallback_metadata=True, skip=False)] if not os.path.isdir(b_collection_path) or not os.listdir(b_collection_path): raise AnsibleError(err) for b_possible_collection in os.listdir(b_collection_path): b_collection = os.path.join(b_collection_path, b_possible_collection) if not os.path.isdir(b_collection): continue b_galaxy = get_galaxy_metadata_path(b_collection) display.vvvvv("Considering %s as a possible path to a collection's galaxy.yml" % b_galaxy) if os.path.exists(b_galaxy): reqs.append(CollectionRequirement.from_path(b_collection, force, parent, fallback_metadata=True, skip=False)) if not reqs: raise AnsibleError(err) return reqs def _get_collection_info(dep_map, existing_collections, collection, requirement, source, b_temp_path, apis, validate_certs, force, parent=None, allow_pre_release=False, req_type=None): dep_msg = "" if parent: dep_msg = " - as dependency of %s" % parent display.vvv("Processing requirement collection '%s'%s" % (to_text(collection), dep_msg)) b_tar_path = None is_file = ( req_type == 'file' or (not req_type and os.path.isfile(to_bytes(collection, errors='surrogate_or_strict'))) ) is_url = ( req_type == 'url' or (not req_type and urlparse(collection).scheme.lower() in ['http', 'https']) ) is_scm = ( req_type == 'git' or (not req_type and not b_tar_path and collection.startswith(('git+', 'git@'))) ) if is_file: display.vvvv("Collection requirement '%s' is a tar artifact" % to_text(collection)) b_tar_path = to_bytes(collection, errors='surrogate_or_strict') elif is_url: display.vvvv("Collection requirement '%s' is a URL to a tar artifact" % collection) try: b_tar_path = _download_file(collection, b_temp_path, None, validate_certs) except urllib_error.URLError as err: raise AnsibleError("Failed to 
download collection tar from '%s': %s" % (to_native(collection), to_native(err))) if is_scm: if not collection.startswith('git'): collection = 'git+' + collection name, version, path, fragment = parse_scm(collection, requirement) b_tar_path = scm_archive_collection(path, name=name, version=version) with tarfile.open(b_tar_path, mode='r') as collection_tar: collection_tar.extractall(path=to_text(b_temp_path)) # Ignore requirement if it is set (it must follow semantic versioning, unlike a git version, which is any tree-ish) # If the requirement was the only place version was set, requirement == version at this point if requirement not in {"*", ""} and requirement != version: display.warning( "The collection {0} appears to be a git repository and two versions were provided: '{1}', and '{2}'. " "The version {2} is being disregarded.".format(collection, version, requirement) ) requirement = "*" reqs = _collections_from_scm(collection, requirement, b_temp_path, force, parent) for req in reqs: collection_info = get_collection_info_from_req(dep_map, req) update_dep_map_collection_info(dep_map, existing_collections, collection_info, parent, requirement) else: if b_tar_path: req = CollectionRequirement.from_tar(b_tar_path, force, parent=parent) collection_info = get_collection_info_from_req(dep_map, req) else: validate_collection_name(collection) display.vvvv("Collection requirement '%s' is the name of a collection" % collection) if collection in dep_map: collection_info = dep_map[collection] collection_info.add_requirement(parent, requirement) else: apis = [source] if source else apis collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent, allow_pre_release=allow_pre_release) update_dep_map_collection_info(dep_map, existing_collections, collection_info, parent, requirement) def get_collection_info_from_req(dep_map, collection): collection_name = to_text(collection) if collection_name in dep_map: collection_info = dep_map[collection_name] collection_info.add_requirement(None, collection.latest_version) else: collection_info = collection return collection_info def update_dep_map_collection_info(dep_map, existing_collections, collection_info, parent, requirement): existing = [c for c in existing_collections if to_text(c) == to_text(collection_info)] if existing and not collection_info.force: # Test that the installed collection fits the requirement existing[0].add_requirement(parent, requirement) collection_info = existing[0] dep_map[to_text(collection_info)] = collection_info def parse_scm(collection, version): if ',' in collection: collection, version = collection.split(',', 1) elif version == '*' or not version: version = 'HEAD' if collection.startswith('git+'): path = collection[4:] else: path = collection path, fragment = urldefrag(path) fragment = fragment.strip(os.path.sep) if path.endswith(os.path.sep + '.git'): name = path.split(os.path.sep)[-2] elif '://' not in path and '@' not in path: name = path else: name = path.split('/')[-1] if name.endswith('.git'): name = name[:-4] return name, version, path, fragment def _download_file(url, b_path, expected_hash, validate_certs, headers=None): urlsplit = os.path.splitext(to_text(url.rsplit('/', 1)[1])) b_file_name = to_bytes(urlsplit[0], errors='surrogate_or_strict') b_file_ext = to_bytes(urlsplit[1], errors='surrogate_or_strict') b_file_path = tempfile.NamedTemporaryFile(dir=b_path, prefix=b_file_name, suffix=b_file_ext, delete=False).name display.display("Downloading %s to %s" % (url, 
to_text(b_path))) # Galaxy redirs downloads to S3 which reject the request if an Authorization header is attached so don't redir that resp = open_url(to_native(url, errors='surrogate_or_strict'), validate_certs=validate_certs, headers=headers, unredirected_headers=['Authorization'], http_agent=user_agent()) with open(b_file_path, 'wb') as download_file: actual_hash = _consume_file(resp, download_file) if expected_hash: display.vvvv("Validating downloaded file hash %s with expected hash %s" % (actual_hash, expected_hash)) if expected_hash != actual_hash: raise AnsibleError("Mismatch artifact hash with downloaded file") return b_file_path def _extract_tar_dir(tar, dirname, b_dest): """ Extracts a directory from a collection tar. """ member_names = [to_native(dirname, errors='surrogate_or_strict')] # Create list of members with and without trailing separator if not member_names[-1].endswith(os.path.sep): member_names.append(member_names[-1] + os.path.sep) # Try all of the member names and stop on the first one that are able to successfully get for member in member_names: try: tar_member = tar.getmember(member) except KeyError: continue break else: # If we still can't find the member, raise a nice error. raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict')) b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict')) b_parent_path = os.path.dirname(b_dir_path) try: os.makedirs(b_parent_path, mode=0o0755) except OSError as e: if e.errno != errno.EEXIST: raise if tar_member.type == tarfile.SYMTYPE: b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict') if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path): raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of " "collection '%s'" % (to_native(dirname), b_link_path)) os.symlink(b_link_path, b_dir_path) else: if not os.path.isdir(b_dir_path): os.mkdir(b_dir_path, 0o0755) def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None): """ Extracts a file from a collection tar. """ with _get_tar_file_member(tar, filename) as (tar_member, tar_obj): if tar_member.type == tarfile.SYMTYPE: actual_hash = _consume_file(tar_obj) else: with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj: actual_hash = _consume_file(tar_obj, tmpfile_obj) if expected_hash and actual_hash != expected_hash: raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'" % (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name))) b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))) b_parent_dir = os.path.dirname(b_dest_filepath) if not _is_child_path(b_parent_dir, b_dest): raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory" % to_native(filename, errors='surrogate_or_strict')) if not os.path.exists(b_parent_dir): # Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check # makes sure we create the parent directory even if it wasn't set in the metadata. 
os.makedirs(b_parent_dir, mode=0o0755) if tar_member.type == tarfile.SYMTYPE: b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict') if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath): raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of " "collection '%s'" % (to_native(filename), b_link_path)) os.symlink(b_link_path, b_dest_filepath) else: shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath) # Default to rw-r--r-- and only add execute if the tar file has execute. tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict')) new_mode = 0o644 if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR: new_mode |= 0o0111 os.chmod(b_dest_filepath, new_mode) def _get_tar_file_member(tar, filename): n_filename = to_native(filename, errors='surrogate_or_strict') try: member = tar.getmember(n_filename) except KeyError: raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % ( to_native(tar.name), n_filename)) return _tarfile_extract(tar, member) def _get_json_from_tar_file(b_path, filename): file_contents = '' with tarfile.open(b_path, mode='r') as collection_tar: with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj): bufsize = 65536 data = tar_obj.read(bufsize) while data: file_contents += to_text(data) data = tar_obj.read(bufsize) return json.loads(file_contents) def _get_tar_file_hash(b_path, filename): with tarfile.open(b_path, mode='r') as collection_tar: with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj): return _consume_file(tar_obj) def _is_child_path(path, parent_path, link_name=None): """ Checks that path is a path within the parent_path specified. """ b_path = to_bytes(path, errors='surrogate_or_strict') if link_name and not os.path.isabs(b_path): # If link_name is specified, path is the source of the link and we need to resolve the absolute path. b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict')) b_path = os.path.abspath(os.path.join(b_link_dir, b_path)) b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict') return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep)) def _consume_file(read_from, write_to=None): bufsize = 65536 sha256_digest = sha256() data = read_from.read(bufsize) while data: if write_to is not None: write_to.write(data) write_to.flush() sha256_digest.update(data) data = read_from.read(bufsize) return sha256_digest.hexdigest() def get_galaxy_metadata_path(b_path): b_default_path = os.path.join(b_path, b'galaxy.yml') candidate_names = [b'galaxy.yml', b'galaxy.yaml'] for b_name in candidate_names: b_candidate_path = os.path.join(b_path, b_name) if os.path.exists(b_candidate_path): return b_candidate_path return b_default_path
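The fix visible in `download_collections` above dispatches on `requirement.api` before ever calling `download()`. Condensed into a hypothetical standalone helper (the name `resolve_download` and its signature are illustrative only; `build_collection` is the module-level function shown earlier):

```python
import os
import shutil

from ansible.galaxy.collection import build_collection  # the module reproduced above


def resolve_download(requirement, b_temp_path, dest_path, scm_build_path):
    # Hypothetical condensation of the branch added to download_collections;
    # a sketch of the control flow, not a drop-in replacement.
    if requirement.api is None and requirement.b_path and os.path.isfile(requirement.b_path):
        # Already a local tarball: copy the artifact verbatim.
        shutil.copy(requirement.b_path, dest_path)
    elif requirement.api is None and requirement.b_path:
        # SCM checkout: build an artifact from its galaxy.yml, then move it into place.
        built = build_collection(requirement.b_path, scm_build_path, True)
        shutil.move(built, dest_path)
    else:
        # Galaxy-served requirement: authenticated download, as before the fix.
        shutil.move(requirement.download(b_temp_path), dest_path)
```

The real branch above additionally creates a per-collection `tmp_build-%s` directory and records each downloaded filename in the generated `requirements.yml`.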
closed
ansible/ansible
https://github.com/ansible/ansible
70,429
Download of SCM collections gives traceback
##### SUMMARY Compatibility of `ansible-galaxy collection download` with collections from source control is unclear, but attempting it gives a traceback. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/cli/galaxy.py ##### ANSIBLE VERSION ``` ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Run the command ``` ansible-galaxy collection install -r examples/pytz/requirements.yml -p target/ ``` Where the requirements file `examples/pytz/requirements.yml` contains the contents ```yaml collections: - name: https://github.com/AlanCoding/awx.git#awx_collection,ee_req type: git ``` ##### EXPECTED RESULTS Well, I expect it to be documented clearly one way or the other. I would _like_ for this to work. Why? Because what it installs isn't the same as what source control gave. There is a `galaxy.yml` file checked into source control in my branch. However, if I run the `ansible-galaxy collection install`, it has a `MANIFEST.json` file and no `galaxy.yml`. This suggests that there was some intermediary step, where it built and then installed the collection. Given that speculation, I would expect that the download command will give me the `tar.gz` file which can reproduce the files that the install command gave me. ##### ACTUAL RESULTS ``` $ ansible-galaxy collection download -r examples/pytz/requirements.yml -p target -vvv [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. 
ansible-galaxy 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible-galaxy python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] No config file found; using defaults Reading requirement file at '/Users/alancoding/Documents/repos/ansible-builder/examples/pytz/requirements.yml' Process install dependency map Processing requirement collection 'https://github.com/AlanCoding/awx.git#awx_collection,ee_req' archiving ['/usr/local/bin/git', 'archive', '--prefix=awx/', '--output=/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpxdfhelr6.tar', 'ee_req'] Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/tools' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/setup.cfg' for collection build Skipping '/Users/alancoding/.ansible/tmp/ansible-local-72180s78cry9f/tmpj90wrb1h/awx/awx_collection/galaxy.yml' for collection build Starting collection download process to '/Users/alancoding/Documents/repos/ansible-builder/target' Downloading collection 'awx.awx' to '/Users/alancoding/Documents/repos/ansible-builder/target/awx-awx-0.0.1-devel.tar.gz' ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute '_add_auth_token' the full traceback was: Traceback (most recent call last): File "/Users/alancoding/Documents/repos/ansible/bin/ansible-galaxy", line 125, in <module> exit_code = cli.run() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 498, in run context.CLIARGS['func']() File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 789, in execute_download context.CLIARGS['allow_pre_release']) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 607, in download_collections b_temp_download_path = requirement.download(b_temp_path) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 226, in download self.api._add_auth_token(headers, download_url, required=False) AttributeError: 'NoneType' object has no attribute '_add_auth_token' ```
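The traceback shows `CollectionRequirement.download` assuming every requirement carries a Galaxy API client, while requirements resolved from a git URL have `self.api` set to `None`. The sketch below is a hypothetical guard that avoids the immediate `AttributeError`; the helper name and surrounding flow are illustrative only, not the patch that actually merged:

```python
# Hypothetical guard based on the traceback above: requirements installed
# from a git URL carry no Galaxy API client, so any code path that calls
# self.api._add_auth_token() unconditionally raises AttributeError on None.
def build_download_headers(api, download_url):
    headers = {}
    if api is not None:
        # Only Galaxy-backed requirements can, and need to, attach a token.
        api._add_auth_token(headers, download_url, required=False)
    return headers


# An SCM-sourced requirement has api=None and simply gets empty headers.
assert build_download_headers(None, 'https://example.com/artifact.tar.gz') == {}
```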
https://github.com/ansible/ansible/issues/70429
https://github.com/ansible/ansible/pull/71005
4bd7580dd7f41ca484fde21dda94d2dd856108b0
f6b3b4b430619fbd4af0f2bf50b4715ca959a7ff
2020-07-02T02:08:19Z
python
2020-08-04T15:10:00Z
test/integration/targets/ansible-galaxy-collection-scm/tasks/download.yml
- name: create test download dir
  file:
    path: '{{ galaxy_dir }}/download'
    state: directory

- name: download a git repository
  command: 'ansible-galaxy collection download git+https://github.com/ansible-collections/amazon.aws.git,37875c5b4ba5bf3cc43e07edf29f3432fd76def5'
  args:
    chdir: '{{ galaxy_dir }}/download'
  register: download_collection

- name: check that the file was downloaded
  stat:
    path: '{{ galaxy_dir }}/download/collections/amazon-aws-1.0.0.tar.gz'
  register: download_collection_actual

- assert:
    that:
      - '"Downloading collection ''amazon.aws'' to" in download_collection.stdout'
      - download_collection_actual.stat.exists

- name: test the downloaded repository can be installed
  command: 'ansible-galaxy collection install -r requirements.yml'
  args:
    chdir: '{{ galaxy_dir }}/download/collections/'

- name: list installed collections
  command: 'ansible-galaxy collection list'
  register: installed_collections

- assert:
    that:
      - "'amazon.aws' in installed_collections.stdout"

- include_tasks: ./empty_installed_collections.yml
  when: cleanup
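For a quick manual check outside the test harness, a few lines of Python can confirm that the downloaded artifact exists and is a real built collection; the path below mirrors the test above and is otherwise arbitrary:

```python
# Standalone spot-check mirroring the test above: the downloaded artifact
# exists and opens as a tar.gz containing the collection's MANIFEST.json.
import os
import tarfile

artifact = 'collections/amazon-aws-1.0.0.tar.gz'  # path from the test above
assert os.path.isfile(artifact), 'collection artifact was not downloaded'
with tarfile.open(artifact, mode='r:gz') as tar:
    assert 'MANIFEST.json' in tar.getnames(), 'artifact is not a built collection'
```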
closed
ansible/ansible
https://github.com/ansible/ansible
70,918
Add /meta and runtime.yml docs to Developer Guide
##### SUMMARY Looking at the Developer Guide (devel branch): https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html Not seeing any mention of `runtime.yml` nor the `/meta` directory as part of the schema. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME Unsure ##### ANSIBLE VERSION Devel - 2.10 ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION Developers should be using semver as well as providing all info in the `runtime.yml` file (like `requires_ansible` and more).
https://github.com/ansible/ansible/issues/70918
https://github.com/ansible/ansible/pull/71035
f6b3b4b430619fbd4af0f2bf50b4715ca959a7ff
a9eb8b04882669bd17cd82780c908863e7504886
2020-07-27T14:14:23Z
python
2020-08-04T15:25:08Z
docs/docsite/rst/dev_guide/developing_collections.rst
.. _developing_collections: ********************** Developing collections ********************** Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins. You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_. * For details on how to *use* collections see :ref:`collections`. * For the current development status of Collections and FAQ see `Ansible Collections Overview and FAQ <https://github.com/ansible-collections/overview/blob/master/README.rst>`_. .. contents:: :local: :depth: 2 .. _collection_structure: Collection structure ==================== Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy and other tools need in order to package, build and publish the collection:: collection/ ├── docs/ ├── galaxy.yml ├── plugins/ │ ├── modules/ │ │ └── module1.py │ ├── inventory/ │ └── .../ ├── README.md ├── roles/ │ ├── role1/ │ ├── role2/ │ └── .../ ├── playbooks/ │ ├── files/ │ ├── vars/ │ ├── templates/ │ └── tasks/ └── tests/ .. note:: * Ansible only accepts ``.md`` extensions for the :file:`README` file and any files in the :file:`/docs` folder. * See the `ansible-collections <https://github.com/ansible-collections/>`_ GitHub Org for examples of collection structure. * Not all directories are currently in use. Those are placeholders for future features. .. _galaxy_yml: galaxy.yml ---------- A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact. See :ref:`collections_galaxy_meta` for details. .. _collections_doc_dir: docs directory --------------- Put general documentation for the collection here. Keep the specific documentation for plugins and modules embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Use markdown and do not add subfolders. Use ``ansible-doc`` to view documentation for plugins inside a collection: .. code-block:: bash ansible-doc -t lookup my_namespace.my_collection.lookup1 The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the Galaxy namespace and ``my_collection`` is the collection name within that namespace. .. note:: The Galaxy namespace of an Ansible collection is defined in the ``galaxy.yml`` file. It can be different from the GitHub organization or repository name. .. _collections_plugin_dir: plugins directory ------------------ Add a 'per plugin type' specific subdirectory here, including ``module_utils`` which is usable not only by modules, but by most plugins by using their FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play. Vars plugins are unsupported in collections. Cache plugins may be used in collections for fact caching, but are not supported for inventory plugins. .. _collection_module_utils: module_utils ^^^^^^^^^^^^ When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. 
The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}`` The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and those provided by a collection. In this example the namespace is ``ansible_example``, the collection is ``community``. In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is ``ansible_example.community.plugins.module_utils.qradar``: .. code-block:: python from ansible.module_utils.basic import AnsibleModule from ansible.module_utils._text import to_text from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus from ansible.module_utils.six.moves.urllib.error import HTTPError from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest argspec = dict( name=dict(required=True, type='str'), state=dict(choices=['present', 'absent'], required=True), ) module = AnsibleModule( argument_spec=argspec, supports_check_mode=True ) qradar_request = QRadarRequest( module, headers={"Content-Type": "application/json"}, not_rest_data_keys=['state'] ) Note that importing something from an ``__init__.py`` file requires using the file name: .. code-block:: python from ansible_collections.namespace.collection_name.plugins.callback.__init__ import CustomBaseClass In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FCQN is ``ansible_example.community.plugins.module_utils.hyperv``: .. code-block:: powershell #!powershell #AnsibleRequires -CSharpUtil Ansible.Basic #AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv $spec = @{ name = @{ required = $true; type = "str" } state = @{ required = $true; choices = @("present", "absent") } } $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec) Invoke-HyperVFunction -Name $module.Params.name $module.ExitJson() .. _collections_roles_dir: roles directory ---------------- Collection roles are mostly the same as existing roles, but with a couple of limitations: - Role names are now limited to contain only lowercase alphanumeric characters, plus ``_`` and start with an alpha character. - Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection. The directory name of the role is used as the role name. Therefore, the directory name must comply with the above role name rules. The collection import into Galaxy will fail if a role name does not comply with these rules. You can migrate 'traditional roles' into a collection but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection specific directories. .. note:: For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value. playbooks directory -------------------- TBD. .. _developing_collections_tests_directory: tests directory ---------------- Ansible Collections are tested much like Ansible itself, by using the `ansible-test` utility which is released as part of Ansible, version 2.9.0 and newer. 
Because Ansible Collections are tested with the same `ansible-test` tooling as Ansible itself, all of the Ansible developer documentation for testing applies to authoring collection tests, with one key concept to keep in mind. See :ref:`testing_collections` for specific information on how to test collections with ``ansible-test``. When reading the :ref:`developing_testing` documentation, there will be content that applies to running Ansible from source code via a git clone, which is typical of an Ansible developer. However, a collection author typically runs Ansible from a stable release rather than from source, and running Ansible from source is not necessary to create collections. Therefore, where the :ref:`developing_testing` documentation discusses `ansible-test` binary paths, command completion, or environment variables, keep in mind that these steps are not needed for collection testing: installing a stable release of Ansible that contains `ansible-test` is expected to set those things up for you. .. _creating_collections_skeleton: Creating a collection skeleton ------------------------------ To start a new collection: .. code-block:: bash collection_dir#> ansible-galaxy collection init my_namespace.my_collection .. note:: Both the namespace and collection names have strict requirements. See `Galaxy namespaces <https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespaces>`_ on the Galaxy docsite for details. Once the skeleton exists, you can populate the directories with the content you want inside the collection. See `ansible-collections <https://github.com/ansible-collections/>`_ GitHub Org to get a better idea of what you can place inside a collection. .. _creating_collections: Creating collections ====================== To create a collection: #. Create a collection skeleton with the ``collection init`` command. See :ref:`creating_collections_skeleton` above. #. Add your content to the collection. #. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`. #. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`. A user can then install your collection on their systems. Currently the ``ansible-galaxy collection`` command implements the following sub commands: * ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template. * ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository. * ``publish``: Publish a built collection artifact to Galaxy. * ``install``: Install one or more collections. To learn more about the ``ansible-galaxy`` cli tool, see the :ref:`ansible-galaxy` man page. .. _docfragments_collections: Using documentation fragments in collections -------------------------------------------- To include documentation fragments in your collection: #. Create the documentation fragment: ``plugins/doc_fragments/fragment_name``. #. Refer to the documentation fragment with its FQCN. .. code-block:: yaml extends_documentation_fragment: - community.kubernetes.k8s_name_options - community.kubernetes.k8s_auth_options - community.kubernetes.k8s_resource_options - community.kubernetes.k8s_scale_options :ref:`module_docs_fragments` covers the basics for documentation fragments. 
The `kubernetes <https://github.com/ansible-collections/kubernetes>`_ collection includes a complete example. You can also share documentation fragments across collections with the FQCN. .. _building_collections: Building collections -------------------- To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection: .. code-block:: bash collection_dir#> ansible-galaxy collection build This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy:: my_collection/ ├── galaxy.yml ├── ... ├── my_namespace-my_collection-1.0.0.tar.gz └── ... .. note:: * Certain files and folders are excluded when building the collection artifact. See :ref:`ignoring_files_and_folders_collections` to exclude other files you would not want to distribute. * If you used the now-deprecated ``Mazer`` tool for any of your collections, delete any and all files it added to your :file:`releases/` directory before you build your collection with ``ansible-galaxy``. * The current Galaxy maximum tarball size is 2 MB. This tarball is mainly intended to upload to Galaxy as a distribution method, but you can use it directly to install the collection on target systems. .. _ignoring_files_and_folders_collections: Ignoring files and folders ^^^^^^^^^^^^^^^^^^^^^^^^^^ By default the build step will include all the files in the collection directory in the final build artifact except for the following: * ``galaxy.yml`` * ``*.pyc`` * ``*.retry`` * ``tests/output`` * previously built artifacts in the root directory * Various version control directories like ``.git/`` To exclude other files and folders when building the collection, you can set a list of file glob-like patterns in the ``build_ignore`` key in the collection's ``galaxy.yml`` file. These patterns use the following special characters for wildcard matching: * ``*``: Matches everything * ``?``: Matches any single character * ``[seq]``: Matches any character in seq * ``[!seq]``: Matches any character not in seq For example, if you wanted to exclude the :file:`sensitive` folder within the ``playbooks`` folder as well as any ``.tar.gz`` archives, you can set the following in your ``galaxy.yml`` file: .. code-block:: yaml build_ignore: - playbooks/sensitive - '*.tar.gz' .. note:: This feature is only supported when running ``ansible-galaxy collection build`` with Ansible 2.10 or newer. .. _trying_collection_locally: Trying collections locally -------------------------- You can try your collection locally by installing it from the tarball. The following will enable an adjacent playbook to access the collection: .. code-block:: bash ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will expect to find collections when attempting to use them. If you don't specify a path value, ``ansible-galaxy collection install`` installs the collection in the first path defined in :ref:`COLLECTIONS_PATHS`, which by default is ``~/.ansible/collections``. Next, try using the local collection inside a playbook. For examples and more details see :ref:`Using collections <using_collections>`. .. _collections_scm_install: Installing collections from a git repository -------------------------------------------- You can also test a version of your collection in development by installing it from a git repository. .. 
code-block:: bash ansible-galaxy collection install git+https://github.com/org/repo.git,devel .. include:: ../shared_snippets/installing_collections_git_repo.txt .. _publishing_collections: Publishing collections ---------------------- You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself. You need a namespace on Galaxy to upload your collection. See `Galaxy namespaces <https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespaces>`_ on the Galaxy docsite for details. .. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it. .. _galaxy_get_token: Getting your API token ^^^^^^^^^^^^^^^^^^^^^^ To upload your collection to Galaxy, you must first obtain an API token (``--token`` in the ``ansible-galaxy`` CLI command or ``token`` in the :file:`ansible.cfg` file under the ``galaxy_server`` section). The API token is a secret token used to protect your content. To get your API token: * For Galaxy, go to the `Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page and click :guilabel:`API Key`. * For Automation Hub, go to https://cloud.redhat.com/ansible/automation-hub/token/ and click :guilabel:`Load token` from the version dropdown. Storing or using your API token ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Once you have retrieved your API token, you can store or use the token for collections in two ways: * Pass the token to the ``ansible-galaxy`` command using the ``--token`` argument. * Specify the token within a Galaxy server list in your :file:`ansible.cfg` file. Using the ``token`` argument ............................ You can use the ``--token`` argument with the ``ansible-galaxy`` command (in conjunction with the ``--server`` argument or :ref:`GALAXY_SERVER` setting in your :file:`ansible.cfg` file). You cannot use ``--token`` with any servers defined in your :ref:`Galaxy server list <galaxy_server_config>`. .. code-block:: text ansible-galaxy collection publish ./geerlingguy-collection-1.2.3.tar.gz --token=<key goes here> Specify the token within a Galaxy server list ............................................. With this option, you configure one or more servers for Galaxy in your :file:`ansible.cfg` file under the ``galaxy_server_list`` section. For each server, you also configure the token. .. code-block:: ini [galaxy] server_list = release_galaxy [galaxy_server.release_galaxy] url=https://galaxy.ansible.com/ token=my_token See :ref:`galaxy_server_config` for complete details. .. _upload_collection_ansible_galaxy: Upload using ansible-galaxy ^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. note:: By default, ``ansible-galaxy`` uses https://galaxy.ansible.com as the Galaxy server (as listed in the :file:`ansible.cfg` file under :ref:`galaxy_server`). If you are only publishing your collection to Ansible Galaxy, you do not need any further configuration. If you are using Red Hat Automation Hub or any other Galaxy server, see :ref:`Configuring the ansible-galaxy client <galaxy_server_config>`. To upload the collection artifact with the ``ansible-galaxy`` command: .. code-block:: bash ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz .. note:: The above command assumes you have retrieved and stored your API token as part of a Galaxy server list. See :ref:`galaxy_get_token` for details. 
The ``ansible-galaxy collection publish`` command triggers an import process, just as if you uploaded the collection through the Galaxy website. The command waits until the import process completes before reporting the status back. If you want to continue without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your `My Imports <https://galaxy.ansible.com/my-imports/>`_ page. .. _upload_collection_galaxy: Upload a collection from the Galaxy website ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To upload your collection artifact directly on Galaxy: #. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces. #. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem. When uploading collections it doesn't matter which namespace you select. The collection will be uploaded to the namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the namespace, the upload request will fail. Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the import process, including any errors or warnings about the metadata and content contained in the collection. .. _collection_versions: Collection versions ------------------- Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number) will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions. Collection versions use `Semantic Versioning <https://semver.org/>`_ for version numbers. Please read the official documentation for details and examples. In summary: * Increment major (for example: x in `x.y.z`) version number for an incompatible API change. * Increment minor (for example: y in `x.y.z`) version number for new functionality in a backwards compatible manner. * Increment patch (for example: z in `x.y.z`) version number for backwards compatible bug fixes. .. _migrate_to_collection: Migrating Ansible content to a different collection ==================================================== To migrate content from one collection to another, you need to create three PRs as follows: #. Create a PR against the old collection to remove the content. #. Create a PR against the new collection to add the files removed in step 1. #. Update the ``ansible/ansible:devel`` branch entries for all files moved. Removing the content from the old collection ---------------------------------------------- Create a PR against the old collection repo to remove the modules, module_utils, plugins, and docs_fragments related to this migration: #. If you are removing an action plugin, remove the corresponding module that contains the documentation. #. If you are removing a module, remove any corresponding action plugin that should stay with it. #. Remove any entries about removed plugins from ``meta/runtime.yml``. Ensure they are added into the new repo. #. Remove sanity ignore lines from ``tests/sanity/ignore\*.txt`` #. Remove associated integration tests from ``tests/integrations/targets/`` and unit tests from ``tests/units/plugins/``. #. 
If you are removing content from ``community.general`` or ``community.network``, remove entries from ``.github/BOTMETA.yml``. #. Carefully review ``meta/runtime.yml`` for any entries you may need to remove or update, in particular deprecated entries. #. Update ``meta/runtime.yml`` to contain redirects for EVERY PLUGIN, pointing to the new collection name. #. If possible, do not yet add deprecation warnings to the new ``meta/runtime.yml`` entries, but only for a later major release. So the order should be: 1. Remove content, add redirects in 3.0.0; 2. Deprecate redirects in 4.0.0; 3. Set removal version to 5.0.0 or later. .. warning:: Maintainers for the old collection have to make sure that the PR is merged in a way that it does not break user experience and semantic versioning: #. A new version containing the merged PR must not be released before the collection the content has been moved to has been released again, with that content contained in it. Otherwise the redirects cannot work and users relying on that content will experience breakage. #. Once 1.0.0 of the collection from which the content has been removed has been released, such PRs can only be merged for a new **major** version (i.e. 2.0.0, 3.0.0, etc.). Adding the content to the new collection ----------------------------------------- Create a PR in the new collection to: #. Add ALL the files removed in the first PR (from the old collection). #. If it is an action plugin, include the corresponding module with documentation. #. If it is a module, check if it has a corresponding action plugin that should move with it. #. Check ``meta/`` for relevant updates to ``action_groups.yml`` and ``runtime.yml`` if they exist. #. Carefully check the moved ``tests/integration`` and ``tests/units`` and update for FQCN. #. Review ``tests/sanity/ignore-\*.txt`` entries. #. Update ``meta/runtime.yml``. Updating ``ansible/ansible:devel`` branch entries for all files moved ---------------------------------------------------------------------- Create a third PR on the ``ansible/ansible`` repository to: #. Update ``lib/ansible/config/ansible_builtin_runtime.yml`` (the redirect entry). #. Update ``.github/BOTMETA.yml`` (the ``migrated_to`` entry). BOTMETA.yml ----------- The `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_ in the ansible/ansible GitHub repository is the source of truth for: * ansibullbot Ansibullbot will know how to redirect existing issues and PRs to the new repo. The build process for docs.ansible.com will know where to find the module docs. .. code-block:: yaml $modules/monitoring/grafana/grafana_plugin.py: migrated_to: community.grafana $modules/monitoring/grafana/grafana_dashboard.py: migrated_to: community.grafana $modules/monitoring/grafana/grafana_datasource.py: migrated_to: community.grafana $plugins/callback/grafana_annotations.py: maintainers: $team_grafana labels: monitoring grafana migrated_to: community.grafana $plugins/doc_fragments/grafana.py: maintainers: $team_grafana labels: monitoring grafana migrated_to: community.grafana `Example PR <https://github.com/ansible/ansible/pull/66981/files>`_ * The ``migrated_to:`` key must be added explicitly for every *file*. You cannot add ``migrated_to`` at the directory level. This is to allow module and plugin webdocs to be redirected to the new collection docs. 
* ``migrated_to:`` MUST be added for every: * module * plugin * module_utils * contrib/inventory script * You do NOT need to add ``migrated_to`` for: * Unit tests * Integration tests * ReStructured Text docs (anything under ``docs/docsite/rst/``) * Files that never existed in ``ansible/ansible:devel`` .. _testing_collections: Testing collections =================== The main tool for testing collections is ``ansible-test``, Ansible's testing tool described in :ref:`developing_testing`. You can run several compile and sanity checks, as well as run unit and integration tests for plugins using ``ansible-test``. When you test collections, test against the ansible-base version(s) you are targeting. You must always execute ``ansible-test`` from the root directory of a collection. You can run ``ansible-test`` in Docker containers without installing any special requirements. The Ansible team uses this approach in Shippable both in the ansible/ansible GitHub repository and in the large community collections such as `community.general <https://github.com/ansible-collections/community.general/>`_ and `community.network <https://github.com/ansible-collections/community.network/>`_. The examples below demonstrate running tests in Docker containers. Compile and sanity tests ------------------------ To run all compile and sanity tests:: ansible-test sanity --docker default -v See :ref:`testing_compile` and :ref:`testing_sanity` for more information. See the :ref:`full list of sanity tests <all_sanity_tests>` for details on the sanity tests and how to fix identified issues. Unit tests ---------- You must place unit tests in the appropriate ``tests/unit/plugins/`` directory. For example, you would place tests for ``plugins/module_utils/foo/bar.py`` in ``tests/unit/plugins/module_utils/foo/test_bar.py`` or ``tests/unit/plugins/module_utils/foo/bar/test_bar.py``. For examples, see the `unit tests in community.general <https://github.com/ansible-collections/community.general/tree/master/tests/unit/>`_. To run all unit tests for all supported Python versions:: ansible-test units --docker default -v To run all unit tests only for a specific Python version:: ansible-test units --docker default -v --python 3.6 To run only a specific unit test:: ansible-test units --docker default -v --python 3.6 tests/unit/plugins/module_utils/foo/test_bar.py You can specify Python requirements in the ``tests/unit/requirements.txt`` file. See :ref:`testing_units` for more information, especially on fixture files. Integration tests ----------------- You must place integration tests in the appropriate ``tests/integration/targets/`` directory. For module integration tests, you can use the module name alone. For example, you would place integration tests for ``plugins/modules/foo.py`` in a directory called ``tests/integration/targets/foo/``. For non-module plugin integration tests, you must add the plugin type to the directory name. For example, you would place integration tests for ``plugins/connections/bar.py`` in a directory called ``tests/integration/targets/connection_bar/``. For lookup plugins, the directory must be called ``lookup_foo``, for inventory plugins, ``inventory_foo``, and so on. You can write two different kinds of integration tests: * Ansible role tests run with ``ansible-playbook`` and validate various aspects of the module. 
They can depend on other integration tests (usually named ``prepare_bar`` or ``setup_bar``, which prepare a service or install a requirement named ``bar`` in order to test module ``foo``) to set up required resources, such as installing required libraries or setting up server services. * ``runme.sh`` tests run directly as scripts. They can set up inventory files, and execute ``ansible-playbook`` or ``ansible-inventory`` with various settings. For examples, see the `integration tests in community.general <https://github.com/ansible-collections/community.general/tree/master/tests/integration/targets/>`_. See also :ref:`testing_integration` for more details. Since integration tests can install requirements, and set up, start, and stop services, we recommend running them in Docker containers or otherwise restricted environments whenever possible. By default, ``ansible-test`` supports Docker images for several operating systems. See the `list of supported docker images <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_data/completion/docker.txt>`_ for all options. Use the ``default`` image mainly for platform-independent integration tests, such as those for cloud modules. The following examples use the ``centos8`` image. To execute all integration tests for a collection:: ansible-test integration --docker centos8 -v If you want more detailed output, run the command with ``-vvv`` instead of ``-v``. Alternatively, specify ``--retry-on-error`` to automatically re-run failed tests with higher verbosity levels. To execute only the integration tests in a specific directory:: ansible-test integration --docker centos8 -v connection_bar You can specify multiple target names. Each target name is the name of a directory in ``tests/integration/targets/``. .. _hacking_collections: Contributing to collections =========================== If you want to add functionality to an existing collection, modify a collection you are using to fix a bug, or change the behavior of a module in a collection, clone the git repository for that collection and make changes on a branch. You can combine changes to a collection with a local checkout of Ansible (``source hacking/env-setup``). This section describes the process for `community.general <https://github.com/ansible-collections/community.general/>`_. To contribute to other collections, replace the folder names ``community`` and ``general`` with the namespace and collection name of a different collection. We assume that you have included ``~/dev/ansible/collections/`` in :ref:`COLLECTIONS_PATHS`, and if that path mentions multiple directories, that you made sure that no other directory earlier in the search path contains a copy of ``community.general``. 
Create the directory ``~/dev/ansible/collections/ansible_collections/community``, and in it clone `the community.general Git repository <https://github.com/ansible-collections/community.general/>`_ or a fork of it into the folder ``general``:: mkdir -p ~/dev/ansible/collections/ansible_collections/community cd ~/dev/ansible/collections/ansible_collections/community git clone [email protected]:ansible-collections/community.general.git general If you clone a fork, add the original repository as a remote ``upstream``:: cd ~/dev/ansible/collections/ansible_collections/community/general git remote add upstream [email protected]:ansible-collections/community.general.git Now you can use this checkout of ``community.general`` in playbooks and roles with whichever version of Ansible you have installed locally, including a local checkout of the ``devel`` branch. For collections hosted in the ``ansible_collections`` GitHub org, create a branch and commit your changes on the branch. When you are done (remember to add tests, see :ref:`testing_collections`), push your changes to your fork of the collection and create a Pull Request. For other collections, especially for collections not hosted on GitHub, check the ``README.md`` of the collection for information on contributing to it. .. _collection_changelogs: Generating changelogs for a collection ====================================== We recommend that you use the `antsibull-changelog <https://github.com/ansible-community/antsibull-changelog>`_ tool to generate Ansible-compatible changelogs for your collection. The Ansible changelog uses the output of this tool to collate all the collections included in an Ansible release into one combined changelog for the release. .. note:: Ansible here refers to the Ansible 2.10 or later release that includes a curated set of collections. If your collection is part of Ansible but you are not using this tool, your collection should include the properly formatted ``changelog.yaml`` file or your changelogs will not be part of the combined Ansible CHANGELOG.rst and Porting Guide at release. See the `changelog.yaml format <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelog.yaml-format.md>`_ for details. Understanding antsibull-changelog --------------------------------- The ``antsibull-changelog`` tool allows you to create and update changelogs for Ansible collections that are compatible with the combined Ansible changelogs. This is an update to the changelog generator used in prior Ansible releases. The tool adds three new changelog fragment categories: ``breaking_changes``, ``security_fixes`` and ``trivial``. The tool also generates the ``changelog.yaml`` file that Ansible uses to create the combined ``CHANGELOG.rst`` file and Porting Guide for the release. See :ref:`changelogs_how_to` and the `antsibull-changelog documentation <https://github.com/ansible-community/antsibull-changelog/tree/main/docs>`_ for complete details. .. note:: The collection maintainers set the changelog policy for their collections. See the individual collection contributing guidelines for complete details. Generating changelogs --------------------- To initialize changelog generation: #. Install ``antsibull-changelog``: :code:`pip install antsibull-changelog`. #. Initialize changelogs for your repository: :code:`antsibull-changelog init <path/to/your/collection>`. #. Optionally, edit the ``changelogs/config.yaml`` file to customize the location of the generated changelog ``.rst`` file or other options. 
See `Bootstrapping changelogs for collections <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelogs.rst#bootstrapping-changelogs-for-collections>`_ for details. To generate changelogs from the changelog fragments you created: #. Optionally, validate your changelog fragments: :code:`antsibull-changelog lint`. #. Generate the changelog for your release: :code:`antsibull-changelog release [--version version_number]`. .. note:: Add the ``--reload-plugins`` option if you ran the ``antsibull-changelog release`` command previously and the version of the collection has not changed. ``antsibull-changelog`` caches the information on all plugins and does not update its cache until the collection version changes. Porting Guide entries ---------------------- The following changelog fragment categories are consumed by the Ansible changelog generator into the Ansible Porting Guide: * ``major_changes`` * ``breaking_changes`` * ``deprecated_features`` * ``removed_features`` .. seealso:: :ref:`collections` Learn how to install and use collections. :ref:`collections_galaxy_meta` Understand the collections metadata structure. :ref:`developing_modules_general` Learn about how to write Ansible modules `Mailing List <https://groups.google.com/group/ansible-devel>`_ The development mailing list `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
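Since ``galaxy.yml`` drives ``ansible-galaxy collection build``, it can be handy to sanity-check the file before building. The sketch below loads it with PyYAML and verifies a plausible set of required fields; the exact field list is an assumption for illustration, so consult the collections galaxy_meta reference above for the authoritative schema:

```python
# Rough pre-flight check for a collection's galaxy.yml before building.
# The required-field list below is an assumption for illustration only.
import yaml  # PyYAML

REQUIRED_FIELDS = ('namespace', 'name', 'version', 'authors', 'readme')


def check_galaxy_yml(path):
    with open(path) as f:
        meta = yaml.safe_load(f) or {}
    missing = [field for field in REQUIRED_FIELDS if not meta.get(field)]
    if missing:
        raise SystemExit('galaxy.yml is missing required fields: %s' % ', '.join(missing))
    return meta


if __name__ == '__main__':
    meta = check_galaxy_yml('galaxy.yml')
    print('%(namespace)s.%(name)s version %(version)s looks buildable' % meta)
```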
closed
ansible/ansible
https://github.com/ansible/ansible
70,702
Cannot set inventory cache_timeout to 0
##### SUMMARY Setting `cache_timeout: 0` in an inventory plugin configuration has no effect. The default value is used instead. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `lib/ansible/plugins/inventory/__init__.py` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0b1.post0 config file = /export/home/orion-admin/ansible-boulder/ansible.cfg configured module search path = [u'/export/home/orion-admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /export/home/orion-admin/ansible/lib/ansible executable location = /export/home/orion-admin/ansible/bin/ansible python version = 2.7.5 (default, Apr 1 2020, 10:09:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` also with ansible 2.9.10 ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True ANSIBLE_SSH_ARGS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardX11=no -o ForwardAgent=no DEFAULT_BECOME(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True DEFAULT_CALLBACK_WHITELIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'profile_tasks'] DEFAULT_FORKS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 25 DEFAULT_GATHERING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = smart DEFAULT_HOST_LIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/inventory'] DEFAULT_INVENTORY_PLUGIN_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/plugins/inventory'] DEFAULT_JINJA2_EXTENSIONS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = jinja2.ext.do DEFAULT_ROLES_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/roles'] DEFAULT_TIMEOUT(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 20 INJECT_FACTS_AS_VARS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True INVENTORY_ENABLED(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'cobbler', u'ini'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> EL 7.8. python 2.7.5 ##### STEPS TO REPRODUCE Use a caching inventory plugin (I'm testing with cobbler - https://github.com/ansible-collections/community.general/pull/627) and set cache_timeout to 0. Observe that it instead uses the default value. 
##### EXPECTED RESULTS cache plugin _timeout is set to 0 The issue is the following two code segments: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L235 ``` if 'cache' in self._options and self.get_option('cache'): cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options) ``` and https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L323 ``` cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) ``` if `self.get_option(opt[1])` evaluates to `false`, which a value of `0` for `cache_timeout` does, then the option value is not passed on to cache_options.
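A minimal fix consistent with this analysis is to test whether the option was set at all rather than whether it is truthy, for example by comparing against `None`. A standalone sketch of the corrected comprehension (not necessarily the exact patch that merged):

```python
# Sketch of the fix: keep an option when it is set at all (not None), so a
# cache_timeout of 0 -- falsy but valid -- still reaches the cache plugin.
def build_cache_options(get_option):
    cache_option_keys = [('_uri', 'cache_connection'),
                         ('_timeout', 'cache_timeout'),
                         ('_prefix', 'cache_prefix')]
    return dict((dest, get_option(name)) for dest, name in cache_option_keys
                if get_option(name) is not None)


# With the truthiness test, a timeout of 0 vanished; with `is not None` it survives.
opts = {'cache_connection': '/tmp/cache', 'cache_timeout': 0, 'cache_prefix': None}
assert build_cache_options(opts.get) == {'_uri': '/tmp/cache', '_timeout': 0}
```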
https://github.com/ansible/ansible/issues/70702
https://github.com/ansible/ansible/pull/70977
a9eb8b04882669bd17cd82780c908863e7504886
3bec27dc34e946f5ea69e1d0651a8a22f7ab88db
2020-07-16T22:35:57Z
python
2020-08-04T16:54:28Z
lib/ansible/plugins/inventory/__init__.py
# (c) 2017, Red Hat, inc # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <https://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import hashlib import os import string from ansible.errors import AnsibleError, AnsibleParserError from ansible.inventory.group import to_safe_group_name as original_safe from ansible.parsing.utils.addresses import parse_address from ansible.plugins import AnsiblePlugin from ansible.plugins.cache import CachePluginAdjudicator as CacheObject from ansible.module_utils._text import to_bytes, to_native from ansible.module_utils.common._collections_compat import Mapping from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import string_types from ansible.template import Templar from ansible.utils.display import Display from ansible.utils.vars import combine_vars display = Display() # Helper methods def to_safe_group_name(name): # placeholder for backwards compat return original_safe(name, force=True, silent=True) def detect_range(line=None): ''' A helper function that checks a given host line to see if it contains a range pattern described in the docstring above. Returns True if the given line contains a pattern, else False. ''' return '[' in line def expand_hostname_range(line=None): ''' A helper function that expands a given line that contains a pattern specified in top docstring, and returns a list that consists of the expanded version. The '[' and ']' characters are used to maintain the pseudo-code appearance. They are replaced in this function with '|' to ease string splitting. References: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#hosts-and-groups ''' all_hosts = [] if line: # A hostname such as db[1:6]-node is considered to consists # three parts: # head: 'db' # nrange: [1:6]; range() is a built-in. Can't use the name # tail: '-node' # Add support for multiple ranges in a host so: # db[01:10:3]node-[01:10] # - to do this we split off at the first [...] set, getting the list # of hosts and then repeat until none left. # - also add an optional third parameter which contains the step. 
(Default: 1) # so range can be [01:10:2] -> 01 03 05 07 09 (head, nrange, tail) = line.replace('[', '|', 1).replace(']', '|', 1).split('|') bounds = nrange.split(":") if len(bounds) != 2 and len(bounds) != 3: raise AnsibleError("host range must be begin:end or begin:end:step") beg = bounds[0] end = bounds[1] if len(bounds) == 2: step = 1 else: step = bounds[2] if not beg: beg = "0" if not end: raise AnsibleError("host range must specify end value") if beg[0] == '0' and len(beg) > 1: rlen = len(beg) # range length formatting hint if rlen != len(end): raise AnsibleError("host range must specify equal-length begin and end formats") def fill(x): return str(x).zfill(rlen) # range sequence else: fill = str try: i_beg = string.ascii_letters.index(beg) i_end = string.ascii_letters.index(end) if i_beg > i_end: raise AnsibleError("host range must have begin <= end") seq = list(string.ascii_letters[i_beg:i_end + 1:int(step)]) except ValueError: # not an alpha range seq = range(int(beg), int(end) + 1, int(step)) for rseq in seq: hname = ''.join((head, fill(rseq), tail)) if detect_range(hname): all_hosts.extend(expand_hostname_range(hname)) else: all_hosts.append(hname) return all_hosts def get_cache_plugin(plugin_name, **kwargs): try: cache = CacheObject(plugin_name, **kwargs) except AnsibleError as e: if 'fact_caching_connection' in to_native(e): raise AnsibleError("error, '%s' inventory cache plugin requires the one of the following to be set " "to a writeable directory path:\nansible.cfg:\n[default]: fact_caching_connection,\n" "[inventory]: cache_connection;\nEnvironment:\nANSIBLE_INVENTORY_CACHE_CONNECTION,\n" "ANSIBLE_CACHE_PLUGIN_CONNECTION." % plugin_name) else: raise e if plugin_name != 'memory' and kwargs and not getattr(cache._plugin, '_options', None): raise AnsibleError('Unable to use cache plugin {0} for inventory. Cache options were provided but may not reconcile ' 'correctly unless set via set_options. Refer to the porting guide if the plugin derives user settings ' 'from ansible.constants.'.format(plugin_name)) return cache class BaseInventoryPlugin(AnsiblePlugin): """ Parses an Inventory Source""" TYPE = 'generator' _sanitize_group_name = staticmethod(to_safe_group_name) def __init__(self): super(BaseInventoryPlugin, self).__init__() self._options = {} self.inventory = None self.display = display def parse(self, inventory, loader, path, cache=True): ''' Populates inventory from the given data. Raises an error on any parse failure :arg inventory: a copy of the previously accumulated inventory data, to be updated with any new data this plugin provides. The inventory can be empty if no other source/plugin ran successfully. :arg loader: a reference to the DataLoader, which can read in YAML and JSON files, it also has Vault support to automatically decrypt files. :arg path: the string that represents the 'inventory source', normally a path to a configuration file for this inventory, but it can also be a raw string for this plugin to consume :arg cache: a boolean that indicates if the plugin should use the cache or not you can ignore if this plugin does not implement caching. ''' self.loader = loader self.inventory = inventory self.templar = Templar(loader=loader) def verify_file(self, path): ''' Verify if file is usable by this plugin, base does minimal accessibility check :arg path: a string that was passed as an inventory source, it normally is a path to a config file, but this is not a requirement, it can also be parsed itself as the inventory data to process. 
            So only call this base class if you expect it to be a file.
        '''

        valid = False
        b_path = to_bytes(path, errors='surrogate_or_strict')
        if (os.path.exists(b_path) and os.access(b_path, os.R_OK)):
            valid = True
        else:
            self.display.vvv('Skipping due to inventory source not existing or not being readable by the current user')
        return valid

    def _populate_host_vars(self, hosts, variables, group=None, port=None):
        if not isinstance(variables, Mapping):
            raise AnsibleParserError("Invalid data from file, expected dictionary and got:\n\n%s" % to_native(variables))

        for host in hosts:
            self.inventory.add_host(host, group=group, port=port)
            for k in variables:
                self.inventory.set_variable(host, k, variables[k])

    def _read_config_data(self, path):
        ''' validate config and set options as appropriate
            :arg path: path to common yaml format config file for this plugin
        '''

        config = {}
        try:
            # avoid loader cache so meta: refresh_inventory can pick up config changes
            # if we read more than once, fs cache should be good enough
            config = self.loader.load_from_file(path, cache=False)
        except Exception as e:
            raise AnsibleParserError(to_native(e))

        # a plugin can be loaded via many different names with redirection- if so, we want to accept any of those names
        valid_names = getattr(self, '_redirected_names') or [self.NAME]

        if not config:
            # no data
            raise AnsibleParserError("%s is empty" % (to_native(path)))
        elif config.get('plugin') not in valid_names:
            # this is not my config file
            raise AnsibleParserError("Incorrect plugin name in file: %s" % config.get('plugin', 'none found'))
        elif not isinstance(config, Mapping):
            # configs are dictionaries
            raise AnsibleParserError('inventory source has invalid structure, it should be a dictionary, got: %s' % type(config))

        self.set_options(direct=config)
        if 'cache' in self._options and self.get_option('cache'):
            cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')]
            cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1]))
            self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options)

        return config

    def _consume_options(self, data):
        ''' update existing options from alternate configuration sources not normally used by Ansible.
            Many API libraries already have existing configuration sources, this allows plugin author to leverage them.
            :arg data: key/value pairs that correspond to configuration options for this plugin
        '''

        for k in self._options:
            if k in data:
                self._options[k] = data.pop(k)

    def _expand_hostpattern(self, hostpattern):
        '''
        Takes a single host pattern and returns a list of hostnames and an
        optional port number that applies to all of them.
        '''
        # Can the given hostpattern be parsed as a host with an optional port
        # specification?

        try:
            (pattern, port) = parse_address(hostpattern, allow_ranges=True)
        except Exception:
            # not a recognizable host pattern
            pattern = hostpattern
            port = None

        # Once we have separated the pattern, we expand it into list of one or
        # more hostnames, depending on whether it contains any [x:y] ranges.

        if detect_range(pattern):
            hostnames = expand_hostname_range(pattern)
        else:
            hostnames = [pattern]

        return (hostnames, port)


class BaseFileInventoryPlugin(BaseInventoryPlugin):
    """ Parses a File based Inventory Source"""

    TYPE = 'storage'

    def __init__(self):
        super(BaseFileInventoryPlugin, self).__init__()


class DeprecatedCache(object):
    def __init__(self, real_cacheable):
        self.real_cacheable = real_cacheable

    def get(self, key):
        display.deprecated('InventoryModule should utilize self._cache as a dict instead of self.cache. '
                           'When expecting a KeyError, use self._cache[key] instead of using self.cache.get(key). '
                           'self._cache is a dictionary and will return a default value instead of raising a KeyError '
                           'when the key does not exist', version='2.12', collection_name='ansible.builtin')
        return self.real_cacheable._cache[key]

    def set(self, key, value):
        display.deprecated('InventoryModule should utilize self._cache as a dict instead of self.cache. '
                           'To set the self._cache dictionary, use self._cache[key] = value instead of self.cache.set(key, value). '
                           'To force update the underlying cache plugin with the contents of self._cache before parse() is complete, '
                           'call self.set_cache_plugin and it will use the self._cache dictionary to update the cache plugin',
                           version='2.12', collection_name='ansible.builtin')
        self.real_cacheable._cache[key] = value
        self.real_cacheable.set_cache_plugin()

    def __getattr__(self, name):
        display.deprecated('InventoryModule should utilize self._cache instead of self.cache',
                           version='2.12', collection_name='ansible.builtin')
        return self.real_cacheable._cache.__getattribute__(name)


class Cacheable(object):

    _cache = CacheObject()

    @property
    def cache(self):
        return DeprecatedCache(self)

    def load_cache_plugin(self):
        plugin_name = self.get_option('cache_plugin')
        cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')]
        cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1]))
        self._cache = get_cache_plugin(plugin_name, **cache_options)

    def get_cache_key(self, path):
        return "{0}_{1}".format(self.NAME, self._get_cache_prefix(path))

    def _get_cache_prefix(self, path):
        ''' create predictable unique prefix for plugin/inventory '''

        m = hashlib.sha1()
        m.update(to_bytes(self.NAME, errors='surrogate_or_strict'))
        d1 = m.hexdigest()

        n = hashlib.sha1()
        n.update(to_bytes(path, errors='surrogate_or_strict'))
        d2 = n.hexdigest()

        return 's_'.join([d1[:5], d2[:5]])

    def clear_cache(self):
        self._cache.flush()

    def update_cache_if_changed(self):
        self._cache.update_cache_if_changed()

    def set_cache_plugin(self):
        self._cache.set_cache()


class Constructable(object):

    def _compose(self, template, variables):
        ''' helper method for plugins to compose variables for Ansible based on jinja2 expression and inventory vars'''
        t = self.templar
        t.available_variables = variables
        return t.template('%s%s%s' % (t.environment.variable_start_string, template, t.environment.variable_end_string),
                          disable_lookups=True)

    def _set_composite_vars(self, compose, variables, host, strict=False):
        ''' loops over compose entries to create vars for hosts '''
        if compose and isinstance(compose, dict):
            for varname in compose:
                try:
                    composite = self._compose(compose[varname], variables)
                except Exception as e:
                    if strict:
                        raise AnsibleError("Could not set %s for host %s: %s" % (varname, host, to_native(e)))
                    continue
                self.inventory.set_variable(host, varname, composite)

    def _add_host_to_composed_groups(self, groups, variables, host, strict=False):
        ''' helper to create complex groups for plugins based on jinja2 conditionals, hosts that meet the conditional are added to group'''
        # process each 'group entry'
        if groups and isinstance(groups, dict):
            variables = combine_vars(variables, self.inventory.get_host(host).get_vars())
            self.templar.available_variables = variables
            for group_name in groups:
                conditional = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % groups[group_name]
                group_name = original_safe(group_name, force=True)
                try:
                    result = boolean(self.templar.template(conditional))
                except Exception as e:
                    if strict:
                        raise AnsibleParserError("Could not add host %s to group %s: %s" % (host, group_name, to_native(e)))
                    continue

                if result:
                    # ensure group exists, use sanitized name
                    group_name = self.inventory.add_group(group_name)
                    # add host to group
                    self.inventory.add_child(group_name, host)

    def _add_host_to_keyed_groups(self, keys, variables, host, strict=False):
        ''' helper to create groups for plugins based on variable values and add the corresponding hosts to it'''
        if keys and isinstance(keys, list):
            for keyed in keys:
                if keyed and isinstance(keyed, dict):

                    variables = combine_vars(variables, self.inventory.get_host(host).get_vars())
                    try:
                        key = self._compose(keyed.get('key'), variables)
                    except Exception as e:
                        if strict:
                            raise AnsibleParserError("Could not generate group for host %s from %s entry: %s" % (host, keyed.get('key'), to_native(e)))
                        continue

                    if key:
                        prefix = keyed.get('prefix', '')
                        sep = keyed.get('separator', '_')
                        raw_parent_name = keyed.get('parent_group', None)
                        if raw_parent_name:
                            try:
                                raw_parent_name = self.templar.template(raw_parent_name)
                            except AnsibleError as e:
                                if strict:
                                    raise AnsibleParserError("Could not generate parent group %s for group %s: %s" % (raw_parent_name, key, to_native(e)))
                                continue

                        new_raw_group_names = []
                        if isinstance(key, string_types):
                            new_raw_group_names.append(key)
                        elif isinstance(key, list):
                            for name in key:
                                new_raw_group_names.append(name)
                        elif isinstance(key, Mapping):
                            for (gname, gval) in key.items():
                                name = '%s%s%s' % (gname, sep, gval)
                                new_raw_group_names.append(name)
                        else:
                            raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key))

                        for bare_name in new_raw_group_names:
                            gname = self._sanitize_group_name('%s%s%s' % (prefix, sep, bare_name))
                            result_gname = self.inventory.add_group(gname)
                            self.inventory.add_host(host, result_gname)

                            if raw_parent_name:
                                parent_name = self._sanitize_group_name(raw_parent_name)
                                self.inventory.add_group(parent_name)
                                self.inventory.add_child(parent_name, result_gname)

                    else:
                        # exclude case of empty list and dictionary, because these are valid constructions
                        # simply no groups need to be constructed, but are still falsy
                        if strict and key not in ([], {}):
                            raise AnsibleParserError("No key or key resulted empty for %s in host %s, invalid entry" % (keyed.get('key'), host))
                else:
                    raise AnsibleParserError("Invalid keyed group entry, it must be a dictionary: %s " % keyed)
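A note on the cache-option harvesting that appears twice above (in `_read_config_data` and `Cacheable.load_cache_plugin`): the dict comprehension keeps an option only when its value is truthy, so an explicit `cache_timeout: 0` is silently dropped and the cache plugin's default wins. Below is a minimal standalone sketch of the pitfall and an `is not None` variant; `fake_get_option` and the `options` dict are illustrative stand-ins, not part of the real API:

```python
# Minimal repro of the truthiness filter used above.
# fake_get_option is a hypothetical stand-in for self.get_option().
options = {'cache_connection': '/tmp/cache', 'cache_timeout': 0, 'cache_prefix': 'ansible_inventory_'}


def fake_get_option(name):
    return options.get(name)


cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')]

# Filtering on truthiness drops cache_timeout=0 entirely.
buggy = dict((opt[0], fake_get_option(opt[1])) for opt in cache_option_keys if fake_get_option(opt[1]))
assert '_timeout' not in buggy

# Filtering on "is not None" keeps an explicit 0 while still skipping unset options.
fixed = dict((opt[0], fake_get_option(opt[1])) for opt in cache_option_keys if fake_get_option(opt[1]) is not None)
assert fixed['_timeout'] == 0
```

Filtering on `is not None` preserves falsy-but-meaningful values such as `0` while still letting genuinely unset options fall through to the plugin defaults.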
closed
ansible/ansible
https://github.com/ansible/ansible
70,702
Cannot set inventory cache_timeout to 0
##### SUMMARY Setting `cache_timeout: 0` in an inventory plugin configuration has no effect. The default value is used instead. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `lib/ansible/plugins/inventory/__init__.py` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0b1.post0 config file = /export/home/orion-admin/ansible-boulder/ansible.cfg configured module search path = [u'/export/home/orion-admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /export/home/orion-admin/ansible/lib/ansible executable location = /export/home/orion-admin/ansible/bin/ansible python version = 2.7.5 (default, Apr 1 2020, 10:09:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` also with ansible 2.9.10 ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True ANSIBLE_SSH_ARGS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardX11=no -o ForwardAgent=no DEFAULT_BECOME(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True DEFAULT_CALLBACK_WHITELIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'profile_tasks'] DEFAULT_FORKS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 25 DEFAULT_GATHERING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = smart DEFAULT_HOST_LIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/inventory'] DEFAULT_INVENTORY_PLUGIN_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/plugins/inventory'] DEFAULT_JINJA2_EXTENSIONS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = jinja2.ext.do DEFAULT_ROLES_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/roles'] DEFAULT_TIMEOUT(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 20 INJECT_FACTS_AS_VARS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True INVENTORY_ENABLED(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'cobbler', u'ini'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> EL 7.8. python 2.7.5 ##### STEPS TO REPRODUCE Use a caching inventory plugin (I'm testing with cobbler - https://github.com/ansible-collections/community.general/pull/627) and set cache_timeout to 0. Observe that it instead uses the default value. 
##### EXPECTED RESULTS cache plugin _timeout is set to 0 The issue is the following two code segments: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L235 ``` if 'cache' in self._options and self.get_option('cache'): cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options) ``` and https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L323 ``` cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) ``` if `self.get_option(opt[1])` evaluates to `false`, which a value of `0` for `cache_timeout` does, then the option value is not passed on to cache_options.
https://github.com/ansible/ansible/issues/70702
https://github.com/ansible/ansible/pull/70977
a9eb8b04882669bd17cd82780c908863e7504886
3bec27dc34e946f5ea69e1d0651a8a22f7ab88db
2020-07-16T22:35:57Z
python
2020-08-04T16:54:28Z
test/integration/targets/plugin_config_for_inventory/cache_plugins/none.py
closed
ansible/ansible
https://github.com/ansible/ansible
70,702
Cannot set inventory cache_timeout to 0
##### SUMMARY Setting `cache_timeout: 0` in an inventory plugin configuration has no effect. The default value is used instead. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `lib/ansible/plugins/inventory/__init__.py` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0b1.post0 config file = /export/home/orion-admin/ansible-boulder/ansible.cfg configured module search path = [u'/export/home/orion-admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /export/home/orion-admin/ansible/lib/ansible executable location = /export/home/orion-admin/ansible/bin/ansible python version = 2.7.5 (default, Apr 1 2020, 10:09:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` also with ansible 2.9.10 ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True ANSIBLE_SSH_ARGS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardX11=no -o ForwardAgent=no DEFAULT_BECOME(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True DEFAULT_CALLBACK_WHITELIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'profile_tasks'] DEFAULT_FORKS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 25 DEFAULT_GATHERING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = smart DEFAULT_HOST_LIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/inventory'] DEFAULT_INVENTORY_PLUGIN_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/plugins/inventory'] DEFAULT_JINJA2_EXTENSIONS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = jinja2.ext.do DEFAULT_ROLES_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/roles'] DEFAULT_TIMEOUT(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 20 INJECT_FACTS_AS_VARS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True INVENTORY_ENABLED(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'cobbler', u'ini'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> EL 7.8. python 2.7.5 ##### STEPS TO REPRODUCE Use a caching inventory plugin (I'm testing with cobbler - https://github.com/ansible-collections/community.general/pull/627) and set cache_timeout to 0. Observe that it instead uses the default value. 
##### EXPECTED RESULTS cache plugin _timeout is set to 0 The issue is the following two code segments: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L235 ``` if 'cache' in self._options and self.get_option('cache'): cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options) ``` and https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L323 ``` cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) ``` if `self.get_option(opt[1])` evaluates to `false`, which a value of `0` for `cache_timeout` does, then the option value is not passed on to cache_options.
https://github.com/ansible/ansible/issues/70702
https://github.com/ansible/ansible/pull/70977
a9eb8b04882669bd17cd82780c908863e7504886
3bec27dc34e946f5ea69e1d0651a8a22f7ab88db
2020-07-16T22:35:57Z
python
2020-08-04T16:54:28Z
test/integration/targets/plugin_config_for_inventory/config_with_parameter.yml
plugin: test_inventory
departments:
  - paris
closed
ansible/ansible
https://github.com/ansible/ansible
70,702
Cannot set inventory cache_timeout to 0
##### SUMMARY Setting `cache_timeout: 0` in an inventory plugin configuration has no effect. The default value is used instead. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `lib/ansible/plugins/inventory/__init__.py` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0b1.post0 config file = /export/home/orion-admin/ansible-boulder/ansible.cfg configured module search path = [u'/export/home/orion-admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /export/home/orion-admin/ansible/lib/ansible executable location = /export/home/orion-admin/ansible/bin/ansible python version = 2.7.5 (default, Apr 1 2020, 10:09:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` also with ansible 2.9.10 ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True ANSIBLE_SSH_ARGS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardX11=no -o ForwardAgent=no DEFAULT_BECOME(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True DEFAULT_CALLBACK_WHITELIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'profile_tasks'] DEFAULT_FORKS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 25 DEFAULT_GATHERING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = smart DEFAULT_HOST_LIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/inventory'] DEFAULT_INVENTORY_PLUGIN_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/plugins/inventory'] DEFAULT_JINJA2_EXTENSIONS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = jinja2.ext.do DEFAULT_ROLES_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/roles'] DEFAULT_TIMEOUT(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 20 INJECT_FACTS_AS_VARS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True INVENTORY_ENABLED(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'cobbler', u'ini'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> EL 7.8. python 2.7.5 ##### STEPS TO REPRODUCE Use a caching inventory plugin (I'm testing with cobbler - https://github.com/ansible-collections/community.general/pull/627) and set cache_timeout to 0. Observe that it instead uses the default value. 
##### EXPECTED RESULTS cache plugin _timeout is set to 0 The issue is the following two code segments: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L235 ``` if 'cache' in self._options and self.get_option('cache'): cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options) ``` and https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L323 ``` cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) ``` if `self.get_option(opt[1])` evaluates to `false`, which a value of `0` for `cache_timeout` does, then the option value is not passed on to cache_options.
https://github.com/ansible/ansible/issues/70702
https://github.com/ansible/ansible/pull/70977
a9eb8b04882669bd17cd82780c908863e7504886
3bec27dc34e946f5ea69e1d0651a8a22f7ab88db
2020-07-16T22:35:57Z
python
2020-08-04T16:54:28Z
test/integration/targets/plugin_config_for_inventory/runme.sh
#!/usr/bin/env bash

set -o errexit -o nounset -o xtrace

export ANSIBLE_INVENTORY_PLUGINS=./
export ANSIBLE_INVENTORY_ENABLED=test_inventory

# check default values
ansible-inventory --list -i ./config_without_parameter.yml --export | \
    env python -c "import json, sys; inv = json.loads(sys.stdin.read()); \
                   assert set(inv['_meta']['hostvars']['test_host']['departments']) == set(['seine-et-marne', 'haute-garonne'])"

# check values
ansible-inventory --list -i ./config_with_parameter.yml --export | \
    env python -c "import json, sys; inv = json.loads(sys.stdin.read()); \
                   assert set(inv['_meta']['hostvars']['test_host']['departments']) == set(['paris'])"
closed
ansible/ansible
https://github.com/ansible/ansible
70,702
Cannot set inventory cache_timeout to 0
##### SUMMARY Setting `cache_timeout: 0` in an inventory plugin configuration has no effect. The default value is used instead. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `lib/ansible/plugins/inventory/__init__.py` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0b1.post0 config file = /export/home/orion-admin/ansible-boulder/ansible.cfg configured module search path = [u'/export/home/orion-admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /export/home/orion-admin/ansible/lib/ansible executable location = /export/home/orion-admin/ansible/bin/ansible python version = 2.7.5 (default, Apr 1 2020, 10:09:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` also with ansible 2.9.10 ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True ANSIBLE_SSH_ARGS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardX11=no -o ForwardAgent=no DEFAULT_BECOME(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True DEFAULT_CALLBACK_WHITELIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'profile_tasks'] DEFAULT_FORKS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 25 DEFAULT_GATHERING(/export/home/orion-admin/ansible-boulder/ansible.cfg) = smart DEFAULT_HOST_LIST(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/inventory'] DEFAULT_INVENTORY_PLUGIN_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/plugins/inventory'] DEFAULT_JINJA2_EXTENSIONS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = jinja2.ext.do DEFAULT_ROLES_PATH(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'/export/home/orion-admin/ansible-boulder/roles'] DEFAULT_TIMEOUT(/export/home/orion-admin/ansible-boulder/ansible.cfg) = 20 INJECT_FACTS_AS_VARS(/export/home/orion-admin/ansible-boulder/ansible.cfg) = True INVENTORY_ENABLED(/export/home/orion-admin/ansible-boulder/ansible.cfg) = [u'cobbler', u'ini'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> EL 7.8. python 2.7.5 ##### STEPS TO REPRODUCE Use a caching inventory plugin (I'm testing with cobbler - https://github.com/ansible-collections/community.general/pull/627) and set cache_timeout to 0. Observe that it instead uses the default value. 
##### EXPECTED RESULTS cache plugin _timeout is set to 0 The issue is the following two code segments: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L235 ``` if 'cache' in self._options and self.get_option('cache'): cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options) ``` and https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L323 ``` cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')] cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1])) ``` if `self.get_option(opt[1])` evaluates to `false`, which a value of `0` for `cache_timeout` does, then the option value is not passed on to cache_options.
https://github.com/ansible/ansible/issues/70702
https://github.com/ansible/ansible/pull/70977
a9eb8b04882669bd17cd82780c908863e7504886
3bec27dc34e946f5ea69e1d0651a8a22f7ab88db
2020-07-16T22:35:57Z
python
2020-08-04T16:54:28Z
test/integration/targets/plugin_config_for_inventory/test_inventory.py
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
    name: test_inventory
    plugin_type: inventory
    authors:
      - Pierre-Louis Bonicoli (@pilou-)
    short_description: test inventory
    description:
        - test inventory (fetch parameters using config API)
    options:
        departments:
            description: test parameter
            type: list
            default:
                - seine-et-marne
                - haute-garonne
            required: False
'''

EXAMPLES = '''
# Example command line: ansible-inventory --list -i test_inventory.yml

plugin: test_inventory
departments:
  - paris
'''

from ansible.plugins.inventory import BaseInventoryPlugin


class InventoryModule(BaseInventoryPlugin):
    NAME = 'test_inventory'

    def verify_file(self, path):
        return True

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self._read_config_data(path=path)

        departments = self.get_option('departments')

        group = 'test_group'
        host = 'test_host'

        self.inventory.add_group(group)
        self.inventory.add_host(group=group, host=host)
        self.inventory.set_variable(host, 'departments', departments)
closed
ansible/ansible
https://github.com/ansible/ansible
63,378
Module Find, Contains option Regex bug ?
Hello, I found a strange return with a module find and contains option, with the python regex "\Z" https://docs.python.org/2/library/re.html \Z --> Matches only at the end of the string. I need to check the result of the last string if it's OK or KO ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module find ##### ANSIBLE VERSION ansible 2.8.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/ansible/ansible executable location = ansible python version = 2.7.5 ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### STEPS TO REPRODUCE create a file /tmp/test.log with : > > 01/01- OK > 01/02- OK > 01/03- KO > 01/04- OK playbook : ``` --- - name: check if last line is OK hosts: localhost become: true tasks: - name: check find: paths: /tmp/ pattern: 'test.log' age: -30d age_stamp: mtime contains: ".*OK$\\Z" use_regex: yes ``` ps : i try with contains: '.*OK$\Z' same result ##### EXPECTED RESULTS the end of last line is OK so we need to have = return "matched": 1, the end of last line is KO so we need to have = return "matched": 0, regex check --> https://regex101.com/r/hR9BB8/1 ##### ACTUAL RESULTS IF my regex is ".*OK$" (without \Z) , I get "matched": 1 but i don't check the last line and same result if the last line is KO output : `ok: [localhost] => { "changed": false, "examined": 138, "files": [], "invocation": { "module_args": { "age": "-30d", "age_stamp": "mtime", "contains": ".*FIN OK$\\Z", "depth": null, "excludes": null, "file_type": "file", "follow": false, "get_checksum": false, "hidden": false, "paths": [ "/tmp/" ], "pattern": "test.log", "patterns": [ "test.log" ], "recurse": false, "size": null, "use_regex": true } }, "matched": 0, "msg": "" }`
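The reporter's observation reproduces outside Ansible: the find module scans files line by line, and every line except possibly the last still carries its trailing newline, which `\Z` can never sit after. Below is a small self-contained repro using the sample log from the report; the whole-file pattern tweak at the end is illustrative, not the shipped fix:

```python
import re

# Standalone repro of the reporter's case: the file is scanned line by line,
# as the find module's contentfilter() does.
lines = ["01/01- OK\n", "01/02- OK\n", "01/03- KO\n", "01/04- OK\n"]
prog = re.compile(r".*OK$\Z")

# Per-line: each line still carries its trailing '\n'; '$' can match before
# that newline but '\Z' only matches at the absolute end of the string, so the
# pattern fails on every line and the module reports "matched": 0.
print(any(prog.match(line) for line in lines))  # False

# Whole-file: with the complete text in one string, '\Z' can anchor to the
# real end of the file (allowing for an optional final newline).
whole = "".join(lines)
print(bool(re.search(r".*OK$\n?\Z", whole, re.MULTILINE)))  # True
```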
https://github.com/ansible/ansible/issues/63378
https://github.com/ansible/ansible/pull/71083
5ca3aec3c4bd03d8619325034ba12a34293276ef
810a9a55930951e547529a44e46c6f1829355fbf
2019-10-11T10:43:12Z
python
2020-08-04T17:49:45Z
changelogs/fragments/63378_find_module_regex_whole_file.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,378
Module Find, Contains option Regex bug ?
Hello, I found a strange return with a module find and contains option, with the python regex "\Z" https://docs.python.org/2/library/re.html \Z --> Matches only at the end of the string. I need to check the result of the last string if it's OK or KO ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module find ##### ANSIBLE VERSION ansible 2.8.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/ansible/ansible executable location = ansible python version = 2.7.5 ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### STEPS TO REPRODUCE create a file /tmp/test.log with : > > 01/01- OK > 01/02- OK > 01/03- KO > 01/04- OK playbook : ``` --- - name: check if last line is OK hosts: localhost become: true tasks: - name: check find: paths: /tmp/ pattern: 'test.log' age: -30d age_stamp: mtime contains: ".*OK$\\Z" use_regex: yes ``` ps : i try with contains: '.*OK$\Z' same result ##### EXPECTED RESULTS the end of last line is OK so we need to have = return "matched": 1, the end of last line is KO so we need to have = return "matched": 0, regex check --> https://regex101.com/r/hR9BB8/1 ##### ACTUAL RESULTS IF my regex is ".*OK$" (without \Z) , I get "matched": 1 but i don't check the last line and same result if the last line is KO output : `ok: [localhost] => { "changed": false, "examined": 138, "files": [], "invocation": { "module_args": { "age": "-30d", "age_stamp": "mtime", "contains": ".*FIN OK$\\Z", "depth": null, "excludes": null, "file_type": "file", "follow": false, "get_checksum": false, "hidden": false, "paths": [ "/tmp/" ], "pattern": "test.log", "patterns": [ "test.log" ], "recurse": false, "size": null, "use_regex": true } }, "matched": 0, "msg": "" }`
https://github.com/ansible/ansible/issues/63378
https://github.com/ansible/ansible/pull/71083
5ca3aec3c4bd03d8619325034ba12a34293276ef
810a9a55930951e547529a44e46c6f1829355fbf
2019-10-11T10:43:12Z
python
2020-08-04T17:49:45Z
lib/ansible/modules/find.py
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: (c) 2014, Ruggero Marchei <[email protected]>
# Copyright: (c) 2015, Brian Coca <[email protected]>
# Copyright: (c) 2016-2017, Konstantin Shalygin <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

DOCUMENTATION = r'''
---
module: find
author: Brian Coca (@bcoca)
version_added: "2.0"
short_description: Return a list of files based on specific criteria
description:
    - Return a list of files based on specific criteria. Multiple criteria are AND'd together.
    - For Windows targets, use the M(win_find) module instead.
options:
    age:
        description:
            - Select files whose age is equal to or greater than the specified time.
            - Use a negative age to find files equal to or less than the specified time.
            - You can choose seconds, minutes, hours, days, or weeks by specifying the
              first letter of any of those words (e.g., "1w").
        type: str
    patterns:
        default: '*'
        description:
            - One or more (shell or regex) patterns, which type is controlled by C(use_regex) option.
            - The patterns restrict the list of files to be returned to those whose basenames match at
              least one of the patterns specified. Multiple patterns can be specified using a list.
            - The pattern is matched against the file base name, excluding the directory.
            - When using regexen, the pattern MUST match the ENTIRE file name, not just parts of it. So
              if you are looking to match all files ending in .default, you'd need to use '.*\.default'
              as a regexp and not just '\.default'.
            - This parameter expects a list, which can be either comma separated or YAML. If any of the
              patterns contain a comma, make sure to put them in a list to avoid splitting the patterns
              in undesirable ways.
        type: list
        aliases: [ pattern ]
        elements: str
    excludes:
        description:
            - One or more (shell or regex) patterns, which type is controlled by C(use_regex) option.
            - Items whose basenames match an C(excludes) pattern are culled from C(patterns) matches.
              Multiple patterns can be specified using a list.
        type: list
        aliases: [ exclude ]
        version_added: "2.5"
        elements: str
    contains:
        description:
            - A regular expression or pattern which should be matched against the file content.
        type: str
    paths:
        description:
            - List of paths of directories to search. All paths must be fully qualified.
        type: list
        required: true
        aliases: [ name, path ]
        elements: str
    file_type:
        description:
            - Type of file to select.
            - The 'link' and 'any' choices were added in Ansible 2.3.
        type: str
        choices: [ any, directory, file, link ]
        default: file
    recurse:
        description:
            - If target is a directory, recursively descend into the directory looking for files.
        type: bool
        default: no
    size:
        description:
            - Select files whose size is equal to or greater than the specified size.
            - Use a negative size to find files equal to or less than the specified size.
            - Unqualified values are in bytes but b, k, m, g, and t can be appended to specify
              bytes, kilobytes, megabytes, gigabytes, and terabytes, respectively.
            - Size is not evaluated for directories.
        type: str
    age_stamp:
        description:
            - Choose the file property against which we compare age.
        type: str
        choices: [ atime, ctime, mtime ]
        default: mtime
    hidden:
        description:
            - Set this to C(yes) to include hidden files, otherwise they will be ignored.
        type: bool
        default: no
    follow:
        description:
            - Set this to C(yes) to follow symlinks in path for systems with python 2.6+.
        type: bool
        default: no
    get_checksum:
        description:
            - Set this to C(yes) to retrieve a file's SHA1 checksum.
        type: bool
        default: no
    use_regex:
        description:
            - If C(no), the patterns are file globs (shell).
            - If C(yes), they are python regexes.
        type: bool
        default: no
    depth:
        description:
            - Set the maximum number of levels to descend into.
            - Setting recurse to C(no) will override this value, which is effectively depth 1.
            - Default is unlimited depth.
        type: int
        version_added: "2.6"
seealso:
- module: win_find
'''


EXAMPLES = r'''
- name: Recursively find /tmp files older than 2 days
  find:
    paths: /tmp
    age: 2d
    recurse: yes

- name: Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
  find:
    paths: /tmp
    age: 4w
    size: 1m
    recurse: yes

- name: Recursively find /var/tmp files with last access time greater than 3600 seconds
  find:
    paths: /var/tmp
    age: 3600
    age_stamp: atime
    recurse: yes

- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz
  find:
    paths: /var/log
    patterns: '*.old,*.log.gz'
    size: 10m

# Note that YAML double quotes require escaping backslashes but yaml single quotes do not.
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz via regex
  find:
    paths: /var/log
    patterns: "^.*?\\.(?:old|log\\.gz)$"
    size: 10m
    use_regex: yes

- name: Find /var/log all directories, exclude nginx and mysql
  find:
    paths: /var/log
    recurse: no
    file_type: directory
    excludes: 'nginx,mysql'

# When using patterns that contain a comma, make sure they are formatted as lists to avoid splitting the pattern
- name: Use a single pattern that contains a comma formatted as a list
  find:
    paths: /var/log
    file_type: file
    use_regex: yes
    patterns: ['^_[0-9]{2,4}_.*.log$']

- name: Use multiple patterns that contain a comma formatted as a YAML list
  find:
    paths: /var/log
    file_type: file
    use_regex: yes
    patterns:
      - '^_[0-9]{2,4}_.*.log$'
      - '^[a-z]{1,5}_.*log$'
'''

RETURN = r'''
files:
    description: All matches found with the specified criteria (see stat module for full output of each dictionary)
    returned: success
    type: list
    sample: [
        {   path: "/var/tmp/test1",
            mode: "0644",
            "...": "...",
            checksum: 16fac7be61a6e4591a33ef4b729c5c3302307523
        },
        {   path: "/var/tmp/test2",
            "...": "..."
        },
        ]
matched:
    description: Number of matches
    returned: success
    type: int
    sample: 14
examined:
    description: Number of filesystem objects looked at
    returned: success
    type: int
    sample: 34
'''

import fnmatch
import grp
import os
import pwd
import re
import stat
import time

from ansible.module_utils.basic import AnsibleModule


def pfilter(f, patterns=None, excludes=None, use_regex=False):
    '''filter using glob patterns'''
    if not patterns and not excludes:
        return True

    if use_regex:
        if patterns and not excludes:
            for p in patterns:
                r = re.compile(p)
                if r.match(f):
                    return True

        elif patterns and excludes:
            for p in patterns:
                r = re.compile(p)
                if r.match(f):
                    for e in excludes:
                        r = re.compile(e)
                        if r.match(f):
                            return False
                    return True

    else:
        if patterns and not excludes:
            for p in patterns:
                if fnmatch.fnmatch(f, p):
                    return True

        elif patterns and excludes:
            for p in patterns:
                if fnmatch.fnmatch(f, p):
                    for e in excludes:
                        if fnmatch.fnmatch(f, e):
                            return False
                    return True

    return False


def agefilter(st, now, age, timestamp):
    '''filter files older than age'''
    if age is None:
        return True
    elif age >= 0 and now - st.__getattribute__("st_%s" % timestamp) >= abs(age):
        return True
    elif age < 0 and now - st.__getattribute__("st_%s" % timestamp) <= abs(age):
        return True
    return False


def sizefilter(st, size):
    '''filter files greater than size'''
    if size is None:
        return True
    elif size >= 0 and st.st_size >= abs(size):
        return True
    elif size < 0 and st.st_size <= abs(size):
        return True
    return False


def contentfilter(fsname, pattern):
    """
    Filter files which contain the given expression
    :arg fsname: Filename to scan for lines matching a pattern
    :arg pattern: Pattern to look for inside of line
    :rtype: bool
    :returns: True if one of the lines in fsname matches the pattern. Otherwise False
    """
    if pattern is None:
        return True

    prog = re.compile(pattern)

    try:
        with open(fsname) as f:
            for line in f:
                if prog.match(line):
                    return True

    except Exception:
        pass

    return False


def statinfo(st):
    pw_name = ""
    gr_name = ""

    try:  # user data
        pw_name = pwd.getpwuid(st.st_uid).pw_name
    except Exception:
        pass

    try:  # group data
        gr_name = grp.getgrgid(st.st_gid).gr_name
    except Exception:
        pass

    return {
        'mode': "%04o" % stat.S_IMODE(st.st_mode),
        'isdir': stat.S_ISDIR(st.st_mode),
        'ischr': stat.S_ISCHR(st.st_mode),
        'isblk': stat.S_ISBLK(st.st_mode),
        'isreg': stat.S_ISREG(st.st_mode),
        'isfifo': stat.S_ISFIFO(st.st_mode),
        'islnk': stat.S_ISLNK(st.st_mode),
        'issock': stat.S_ISSOCK(st.st_mode),
        'uid': st.st_uid,
        'gid': st.st_gid,
        'size': st.st_size,
        'inode': st.st_ino,
        'dev': st.st_dev,
        'nlink': st.st_nlink,
        'atime': st.st_atime,
        'mtime': st.st_mtime,
        'ctime': st.st_ctime,
        'gr_name': gr_name,
        'pw_name': pw_name,
        'wusr': bool(st.st_mode & stat.S_IWUSR),
        'rusr': bool(st.st_mode & stat.S_IRUSR),
        'xusr': bool(st.st_mode & stat.S_IXUSR),
        'wgrp': bool(st.st_mode & stat.S_IWGRP),
        'rgrp': bool(st.st_mode & stat.S_IRGRP),
        'xgrp': bool(st.st_mode & stat.S_IXGRP),
        'woth': bool(st.st_mode & stat.S_IWOTH),
        'roth': bool(st.st_mode & stat.S_IROTH),
        'xoth': bool(st.st_mode & stat.S_IXOTH),
        'isuid': bool(st.st_mode & stat.S_ISUID),
        'isgid': bool(st.st_mode & stat.S_ISGID),
    }


def main():
    module = AnsibleModule(
        argument_spec=dict(
            paths=dict(type='list', required=True, aliases=['name', 'path'], elements='str'),
            patterns=dict(type='list', default=['*'], aliases=['pattern'], elements='str'),
            excludes=dict(type='list', aliases=['exclude'], elements='str'),
            contains=dict(type='str'),
            file_type=dict(type='str', default="file", choices=['any', 'directory', 'file', 'link']),
            age=dict(type='str'),
            age_stamp=dict(type='str', default="mtime", choices=['atime', 'ctime', 'mtime']),
            size=dict(type='str'),
            recurse=dict(type='bool', default=False),
            hidden=dict(type='bool', default=False),
            follow=dict(type='bool', default=False),
            get_checksum=dict(type='bool', default=False),
            use_regex=dict(type='bool', default=False),
            depth=dict(type='int'),
        ),
        supports_check_mode=True,
    )

    params = module.params

    filelist = []

    if params['age'] is None:
        age = None
    else:
        # convert age to seconds:
        m = re.match(r"^(-?\d+)(s|m|h|d|w)?$", params['age'].lower())
        seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
        if m:
            age = int(m.group(1)) * seconds_per_unit.get(m.group(2), 1)
        else:
            module.fail_json(age=params['age'], msg="failed to process age")

    if params['size'] is None:
        size = None
    else:
        # convert size to bytes:
        m = re.match(r"^(-?\d+)(b|k|m|g|t)?$", params['size'].lower())
        bytes_per_unit = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
        if m:
            size = int(m.group(1)) * bytes_per_unit.get(m.group(2), 1)
        else:
            module.fail_json(size=params['size'], msg="failed to process size")

    now = time.time()
    msg = ''
    looked = 0
    for npath in params['paths']:
        npath = os.path.expanduser(os.path.expandvars(npath))
        if os.path.isdir(npath):
            for root, dirs, files in os.walk(npath, followlinks=params['follow']):
                looked = looked + len(files) + len(dirs)
                for fsobj in (files + dirs):
                    fsname = os.path.normpath(os.path.join(root, fsobj))
                    if params['depth']:
                        wpath = npath.rstrip(os.path.sep) + os.path.sep
                        depth = int(fsname.count(os.path.sep)) - int(wpath.count(os.path.sep)) + 1
                        if depth > params['depth']:
                            continue
                    if os.path.basename(fsname).startswith('.') and not params['hidden']:
                        continue

                    try:
                        st = os.lstat(fsname)
                    except Exception:
                        msg += "%s was skipped as it does not seem to be a valid file or it cannot be accessed\n" % fsname
                        continue

                    r = {'path': fsname}
                    if params['file_type'] == 'any':
                        if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):

                            r.update(statinfo(st))
                            if stat.S_ISREG(st.st_mode) and params['get_checksum']:
                                r['checksum'] = module.sha1(fsname)
                            filelist.append(r)

                    elif stat.S_ISDIR(st.st_mode) and params['file_type'] == 'directory':
                        if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):

                            r.update(statinfo(st))
                            filelist.append(r)

                    elif stat.S_ISREG(st.st_mode) and params['file_type'] == 'file':
                        if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and \
                                agefilter(st, now, age, params['age_stamp']) and \
                                sizefilter(st, size) and contentfilter(fsname, params['contains']):

                            r.update(statinfo(st))
                            if params['get_checksum']:
                                r['checksum'] = module.sha1(fsname)
                            filelist.append(r)

                    elif stat.S_ISLNK(st.st_mode) and params['file_type'] == 'link':
                        if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):

                            r.update(statinfo(st))
                            filelist.append(r)

            if not params['recurse']:
                break
        else:
            msg += "%s was skipped as it does not seem to be a valid directory or it cannot be accessed\n" % npath

    matched = len(filelist)
    module.exit_json(files=filelist, changed=False, msg=msg, matched=matched, examined=looked)


if __name__ == '__main__':
    main()
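Given the per-line loop in `contentfilter()` above, one plausible direction for the fix in issue 63378 is to optionally match against the entire file content. The sketch below is hedged: `contentfilter_whole_file` is an illustrative name, and the actual change in PR 71083 may structure this differently (for example, behind a module option):

```python
import re


def contentfilter_whole_file(fsname, pattern):
    """Variant of contentfilter() that matches the pattern against the whole
    file content instead of line by line, so anchors like \\Z behave as the
    reporter expects. Sketch only; the shipped fix may differ in detail."""
    if pattern is None:
        return True

    prog = re.compile(pattern)

    try:
        with open(fsname) as f:
            # re.search() scans the full text, so '\Z' can anchor to the real
            # end of the file rather than the end of each scanned line.
            return bool(prog.search(f.read()))
    except Exception:
        pass

    return False
```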
closed
ansible/ansible
https://github.com/ansible/ansible
63,378
Module Find, Contains option Regex bug ?
Hello, I found a strange return with a module find and contains option, with the python regex "\Z" https://docs.python.org/2/library/re.html \Z --> Matches only at the end of the string. I need to check the result of the last string if it's OK or KO ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module find ##### ANSIBLE VERSION ansible 2.8.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/ansible/ansible executable location = ansible python version = 2.7.5 ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### STEPS TO REPRODUCE create a file /tmp/test.log with : > > 01/01- OK > 01/02- OK > 01/03- KO > 01/04- OK playbook : ``` --- - name: check if last line is OK hosts: localhost become: true tasks: - name: check find: paths: /tmp/ pattern: 'test.log' age: -30d age_stamp: mtime contains: ".*OK$\\Z" use_regex: yes ``` ps : i try with contains: '.*OK$\Z' same result ##### EXPECTED RESULTS the end of last line is OK so we need to have = return "matched": 1, the end of last line is KO so we need to have = return "matched": 0, regex check --> https://regex101.com/r/hR9BB8/1 ##### ACTUAL RESULTS IF my regex is ".*OK$" (without \Z) , I get "matched": 1 but i don't check the last line and same result if the last line is KO output : `ok: [localhost] => { "changed": false, "examined": 138, "files": [], "invocation": { "module_args": { "age": "-30d", "age_stamp": "mtime", "contains": ".*FIN OK$\\Z", "depth": null, "excludes": null, "file_type": "file", "follow": false, "get_checksum": false, "hidden": false, "paths": [ "/tmp/" ], "pattern": "test.log", "patterns": [ "test.log" ], "recurse": false, "size": null, "use_regex": true } }, "matched": 0, "msg": "" }`
https://github.com/ansible/ansible/issues/63378
https://github.com/ansible/ansible/pull/71083
5ca3aec3c4bd03d8619325034ba12a34293276ef
810a9a55930951e547529a44e46c6f1829355fbf
2019-10-11T10:43:12Z
python
2020-08-04T17:49:45Z
test/integration/targets/find/files/a.txt
closed
ansible/ansible
https://github.com/ansible/ansible
63,378
Module Find, Contains option Regex bug ?
Hello, I found a strange return with a module find and contains option, with the python regex "\Z" https://docs.python.org/2/library/re.html \Z --> Matches only at the end of the string. I need to check the result of the last string if it's OK or KO ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module find ##### ANSIBLE VERSION ansible 2.8.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/ansible/ansible executable location = ansible python version = 2.7.5 ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### STEPS TO REPRODUCE create a file /tmp/test.log with : > > 01/01- OK > 01/02- OK > 01/03- KO > 01/04- OK playbook : ``` --- - name: check if last line is OK hosts: localhost become: true tasks: - name: check find: paths: /tmp/ pattern: 'test.log' age: -30d age_stamp: mtime contains: ".*OK$\\Z" use_regex: yes ``` ps : i try with contains: '.*OK$\Z' same result ##### EXPECTED RESULTS the end of last line is OK so we need to have = return "matched": 1, the end of last line is KO so we need to have = return "matched": 0, regex check --> https://regex101.com/r/hR9BB8/1 ##### ACTUAL RESULTS IF my regex is ".*OK$" (without \Z) , I get "matched": 1 but i don't check the last line and same result if the last line is KO output : `ok: [localhost] => { "changed": false, "examined": 138, "files": [], "invocation": { "module_args": { "age": "-30d", "age_stamp": "mtime", "contains": ".*FIN OK$\\Z", "depth": null, "excludes": null, "file_type": "file", "follow": false, "get_checksum": false, "hidden": false, "paths": [ "/tmp/" ], "pattern": "test.log", "patterns": [ "test.log" ], "recurse": false, "size": null, "use_regex": true } }, "matched": 0, "msg": "" }`
https://github.com/ansible/ansible/issues/63378
https://github.com/ansible/ansible/pull/71083
5ca3aec3c4bd03d8619325034ba12a34293276ef
810a9a55930951e547529a44e46c6f1829355fbf
2019-10-11T10:43:12Z
python
2020-08-04T17:49:45Z
test/integration/targets/find/files/log.txt
closed
ansible/ansible
https://github.com/ansible/ansible
63,378
Module Find, Contains option Regex bug ?
Hello, I found a strange return with a module find and contains option, with the python regex "\Z" https://docs.python.org/2/library/re.html \Z --> Matches only at the end of the string. I need to check the result of the last string if it's OK or KO ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module find ##### ANSIBLE VERSION ansible 2.8.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/ansible/ansible executable location = ansible python version = 2.7.5 ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### STEPS TO REPRODUCE create a file /tmp/test.log with : > > 01/01- OK > 01/02- OK > 01/03- KO > 01/04- OK playbook : ``` --- - name: check if last line is OK hosts: localhost become: true tasks: - name: check find: paths: /tmp/ pattern: 'test.log' age: -30d age_stamp: mtime contains: ".*OK$\\Z" use_regex: yes ``` ps : i try with contains: '.*OK$\Z' same result ##### EXPECTED RESULTS the end of last line is OK so we need to have = return "matched": 1, the end of last line is KO so we need to have = return "matched": 0, regex check --> https://regex101.com/r/hR9BB8/1 ##### ACTUAL RESULTS IF my regex is ".*OK$" (without \Z) , I get "matched": 1 but i don't check the last line and same result if the last line is KO output : `ok: [localhost] => { "changed": false, "examined": 138, "files": [], "invocation": { "module_args": { "age": "-30d", "age_stamp": "mtime", "contains": ".*FIN OK$\\Z", "depth": null, "excludes": null, "file_type": "file", "follow": false, "get_checksum": false, "hidden": false, "paths": [ "/tmp/" ], "pattern": "test.log", "patterns": [ "test.log" ], "recurse": false, "size": null, "use_regex": true } }, "matched": 0, "msg": "" }`
https://github.com/ansible/ansible/issues/63378
https://github.com/ansible/ansible/pull/71083
5ca3aec3c4bd03d8619325034ba12a34293276ef
810a9a55930951e547529a44e46c6f1829355fbf
2019-10-11T10:43:12Z
python
2020-08-04T17:49:45Z
test/integration/targets/find/tasks/main.yml
# Test code for the find module.
# (c) 2017, James Tanner <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

- set_fact: output_dir_test={{output_dir}}/test_find

- name: make sure our testing sub-directory does not exist
  file:
    path: "{{ output_dir_test }}"
    state: absent

- name: create our testing sub-directory
  file:
    path: "{{ output_dir_test }}"
    state: directory

##
## find
##

- name: make some directories
  file:
    path: "{{ output_dir_test }}/{{ item }}"
    state: directory
  with_items:
    - a/b/c/d
    - e/f/g/h

- name: make some files
  copy:
    dest: "{{ output_dir_test }}/{{ item }}"
    content: 'data'
  with_items:
    - a/1.txt
    - a/b/2.jpg
    - a/b/c/3
    - a/b/c/d/4.xml
    - e/5.json
    - e/f/6.swp
    - e/f/g/7.img
    - e/f/g/h/8.ogg

- name: find the directories
  find:
    paths: "{{ output_dir_test }}"
    file_type: directory
    recurse: yes
  register: find_test0
- debug: var=find_test0
- name: validate directory results
  assert:
    that:
      - 'find_test0.changed is defined'
      - 'find_test0.examined is defined'
      - 'find_test0.files is defined'
      - 'find_test0.matched is defined'
      - 'find_test0.msg is defined'
      - 'find_test0.matched == 8'
      - 'find_test0.files | length == 8'

- name: find the xml and img files
  find:
    paths: "{{ output_dir_test }}"
    file_type: file
    patterns: "*.xml,*.img"
    recurse: yes
  register: find_test1
- debug: var=find_test1
- name: validate directory results
  assert:
    that:
      - 'find_test1.matched == 2'
      - 'find_test1.files | length == 2'

- name: find the xml file
  find:
    paths: "{{ output_dir_test }}"
    patterns: "*.xml"
    recurse: yes
  register: find_test2
- debug: var=find_test2
- name: validate gr_name and pw_name are defined
  assert:
    that:
      - 'find_test2.matched == 1'
      - 'find_test2.files[0].pw_name is defined'
      - 'find_test2.files[0].gr_name is defined'

- name: find the xml file with empty excludes
  find:
    paths: "{{ output_dir_test }}"
    patterns: "*.xml"
    recurse: yes
    excludes: []
  register: find_test3
- debug: var=find_test3
- name: validate gr_name and pw_name are defined
  assert:
    that:
      - 'find_test3.matched == 1'
      - 'find_test3.files[0].pw_name is defined'
      - 'find_test3.files[0].gr_name is defined'
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true` ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME action, shell, and copy ##### ADDITIONAL INFORMATION When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
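For context on the `chmod +a` suggestion: macOS accepts ACL entries of roughly the form `user:NAME allow read`. Below is a hedged sketch of how such a command could be composed; `macos_allow_read`, the username, and the path are made up for illustration, and the exact ACE grammar should be checked against the macOS `chmod` man page (the shell plugin in the actual fix builds its command differently):

```python
import subprocess


def macos_allow_read(become_user, paths):
    """Grant become_user read access via a macOS ACL entry, mirroring what
    setfacl does on Linux. Illustrative sketch only, not Ansible's code."""
    # macOS chmod accepts ACL entries with '+a', e.g.:
    #   chmod +a "user:webuser allow read" /tmp/ansible-tmp/module.py
    ace = 'user:%s allow read' % become_user
    return subprocess.call(['chmod', '+a', ace] + list(paths)) == 0


# Hypothetical usage:
# macos_allow_read('webuser', ['/tmp/ansible-tmp-1234/AnsiballZ_ping.py'])
```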
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
changelogs/fragments/macos-chmod-acl.yml
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true` ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME action, shell, and copy ##### ADDITIONAL INFORMATION When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
docs/docsite/rst/user_guide/become.rst
.. _become: ****************************************** Understanding privilege escalation: become ****************************************** Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. Because this feature allows you to 'become' another user, different from the user that logged into the machine (remote user), we call it ``become``. The ``become`` keyword leverages existing privilege escalation tools like `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others. .. contents:: :local: Using become ============ You can control the use of ``become`` with play or task directives, connection variables, or at the command line. If you set privilege escalation properties in multiple ways, review the :ref:`general precedence rules<general_precedence_rules>` to understand which settings will be used. A full list of all become plugins that are included in Ansible can be found in the :ref:`become_plugin_list`. Become directives ----------------- You can set the directives that control ``become`` at the play or task level. You can override these by setting connection variables, which often differ from one host to another. These variables and directives are independent. For example, setting ``become_user`` does not set ``become``. become set to ``yes`` to activate privilege escalation. become_user set to user with desired privileges — the user you `become`, NOT the user you login as. Does NOT imply ``become: yes``, to allow it to be set at host level. Default value is ``root``. become_method (at play or task level) overrides the default method set in ansible.cfg, set to use any of the :ref:`become_plugins`. become_flags (at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to nologin. Added in Ansible 2.2. For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user, you can use the default value of ``become_user`` (``root``): .. code-block:: yaml - name: Ensure the httpd service is running service: name: httpd state: started become: yes To run a command as the ``apache`` user: .. code-block:: yaml - name: Run a command as the apache user command: somecommand become: yes become_user: apache To do something as the ``nobody`` user when the shell is nologin: .. code-block:: yaml - name: Run a command as nobody command: somecommand become: yes become_method: su become_user: nobody become_flags: '-s /bin/sh' To specify a password for sudo, run ``ansible-playbook`` with ``--ask-become-pass`` (``-K`` for short). If you run a playbook utilizing ``become`` and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with `CTRL-c`, then execute the playbook with ``-K`` and the appropriate password. Become connection variables --------------------------- You can define different ``become`` options for each managed node or group. You can define these variables in inventory or use them as normal variables. ansible_become overrides the ``become`` directive, decides if privilege escalation is used or not. ansible_become_method which privilege escalation method should be used ansible_become_user set the user you become through privilege escalation; does not imply ``ansible_become: yes`` ansible_become_password set the privilege escalation password. 
See :ref:`playbooks_vault` for details on how to avoid having secrets in plain text ansible_common_remote_group determines if Ansible should try to ``chgrp`` its temporary files to a group if ``setfacl`` and ``chown`` both fail. See `Risks of becoming an unprivileged user`_ for more information. Added in version 2.10. For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this: .. code-block:: text webserver ansible_user=manager ansible_become=yes .. note:: The variables defined above are generic for all become plugins but plugin specific ones can also be set instead. Please see the documentation for each plugin for a list of all options the plugin has and how they can be defined. A full list of become plugins in Ansible can be found at :ref:`become_plugins`. Become command-line options --------------------------- --ask-become-pass, -K ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts. --become, -b run operations with become (no password implied) --become-method=BECOME_METHOD privilege escalation method to use (default=sudo), valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ] --become-user=BECOME_USER run operations as this user (default=root), does not imply --become/-b Risks and limitations of become =============================== Although privilege escalation is mostly intuitive, there are a few limitations on how it works. Users should be aware of these to avoid surprises. Risks of becoming an unprivileged user -------------------------------------- Ansible modules are executed on the remote machine by first substituting the parameters into the module file, then copying the file to the remote machine, and finally executing it there. Everything is fine if the module file is executed without using ``become``, when the ``become_user`` is root, or when the connection to the remote machine is made as root. In these cases Ansible creates the module file with permissions that only allow reading by the user and root, or only allow reading by the unprivileged user being switched to. However, when both the connection user and the ``become_user`` are unprivileged, the module file is written as the user that Ansible connects as (the ``remote_user``), but the file needs to be readable by the user Ansible is set to ``become``. The details of how Ansible solves this can vary based on platform. However, on POSIX systems, Ansible solves this problem in the following way: First, if :command:`setfacl` is installed and available in the remote ``PATH``, and the temporary directory on the remote host is mounted with POSIX.1e filesystem ACL support, Ansible will use POSIX ACLs to share the module file with the second unprivileged user. Next, if POSIX ACLs are **not** available or :command:`setfacl` could not be run, Ansible will attempt to change ownership of the module file using :command:`chown` for systems which support doing so as an unprivileged user. New in Ansible 2.10, if the :command:`chown` fails, Ansible will then check the value of the configuration setting ``ansible_common_remote_group``. Many systems will allow a given user to change the group ownership of a file to a group the user is in. 
As a result, if the second unprivileged user (the ``become_user``) has a UNIX group in common with the user Ansible is connected as (the ``remote_user``), and if ``ansible_common_remote_group`` is defined to be that group, Ansible can try to change the group ownership of the module file to that group by using :command:`chgrp`, thereby likely making it readable to the ``become_user``. At this point, if ``ansible_common_remote_group`` was defined and a :command:`chgrp` was attempted and returned successfully, Ansible assumes (but, importantly, does not check) that the new group ownership is enough and does not fall back further. That is, Ansible **does not check** that the ``become_user`` does in fact share a group with the ``remote_user``; so long as the command exits successfully, Ansible considers the result successful and does not proceed to check ``allow_world_readable_tmpfiles`` per below. If ``ansible_common_remote_group`` is **not** set and the chown above it failed, or if ``ansible_common_remote_group`` *is* set but the :command:`chgrp` (or following group-permissions :command:`chmod`) returned a non-successful exit code, Ansible will lastly check the value of ``allow_world_readable_tmpfiles``. If this is set, Ansible will place the module file in a world-readable temporary directory, with world-readable permissions to allow the ``become_user`` (and incidentally any other user on the system) to read the contents of the file. **If any of the parameters passed to the module are sensitive in nature, and you do not trust the remote machines, then this is a potential security risk.** Once the module is done executing, Ansible deletes the temporary file. Several ways exist to avoid the above logic flow entirely: * Use `pipelining`. When pipelining is enabled, Ansible does not save the module to a temporary file on the client. Instead it pipes the module to the remote python interpreter's stdin. Pipelining does not work for python modules involving file transfer (for example: :ref:`copy <copy_module>`, :ref:`fetch <fetch_module>`, :ref:`template <template_module>`), or for non-python modules. * Avoid becoming an unprivileged user. Temporary files are protected by UNIX file permissions when you ``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX file permissions are also secure if you make the connection to the managed machine as root and then use ``become`` to access an unprivileged account. .. warning:: Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs are not POSIX.1e filesystem acls (they are NFSv4 ACLs instead). Ansible cannot use these ACLs to manage its temp file permissions so you may have to resort to ``allow_world_readable_tmpfiles`` if the remote machines use ZFS. .. versionchanged:: 2.1 Ansible makes it hard to unknowingly use ``become`` insecurely. Starting in Ansible 2.1, Ansible defaults to issuing an error if it cannot execute securely with ``become``. If you cannot use pipelining or POSIX ACLs, must connect as an unprivileged user, must use ``become`` to execute as a different unprivileged user, and decide that your managed nodes are secure enough for the modules you want to run there to be world readable, you can turn on ``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file. Setting ``allow_world_readable_tmpfiles`` will change this from an error into a warning and allow the task to run as it did prior to 2.1. .. versionchanged:: 2.10 Ansible 2.10 introduces the above-mentioned ``ansible_common_remote_group`` fallback. 
As mentioned above, if enabled, it is used when ``remote_user`` and ``become_user`` are both unprivileged users. Refer to the text above for details on when this fallback happens. .. warning:: As mentioned above, if ``ansible_common_remote_group`` and ``allow_world_readable_tmpfiles`` are both enabled, it is unlikely that the world-readable fallback will ever trigger, and yet Ansible might still be unable to access the module file. This is because after the group ownership change is successful, Ansible does not fall back any further, and also does not do any check to ensure that the ``become_user`` is actually a member of the "common group". This is a deliberate design decision: such a check would require another round-trip connection to the remote machine, which is an expensive operation. Ansible does, however, emit a warning in this case. Not supported by all connection plugins --------------------------------------- Privilege escalation methods must also be supported by the connection plugin used. Most connection plugins will warn if they do not support become. Some will just ignore it, as they always run as root (jail, chroot, and so on). Only one method may be enabled per host --------------------------------------- Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user; you need to have privileges to run the command as that user in sudo, or be able to su directly to it (the same applies to pbrun, pfexec, and other supported methods). Privilege escalation must be general ------------------------------------ You cannot limit privilege escalation permissions to certain commands. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file whose name changes every time. If you have '/sbin/service' or '/bin/chmod' as the allowed commands, this will fail with Ansible, as those paths won't match the temporary file that Ansible creates to run the module. If you have security rules that constrain your sudo/pbrun/doas environment to running specific command paths only, use Ansible from a special account that does not have this constraint, or use :ref:`ansible_tower` to manage indirect access to SSH credentials. May not access environment variables populated by pam_systemd -------------------------------------------------------------- For most Linux distributions using ``systemd`` as their init, the default methods used by ``become`` do not open a new "session", in the sense of systemd. Because the ``pam_systemd`` module will not fully initialize a new session, you might have surprises compared to a normal session opened through ssh: some environment variables set by ``pam_systemd``, most notably ``XDG_RUNTIME_DIR``, are not populated for the new user and are instead inherited or simply empty. This might cause trouble when trying to invoke systemd commands that depend on ``XDG_RUNTIME_DIR`` to access the bus: .. code-block:: console $ echo $XDG_RUNTIME_DIR $ systemctl --user status Failed to connect to bus: Permission denied To force ``become`` to open a new systemd session that goes through ``pam_systemd``, you can use ``become_method: machinectl``. For more information, see `this systemd issue <https://github.com/systemd/systemd/issues/825#issuecomment-127917622>`_.
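For example, a minimal sketch of a task that opens a full session this way (the user name is hypothetical, and the ``machinectl`` become method must be available and permitted on the target):

.. code-block:: yaml

    - name: query the user manager through a full systemd session
      command: systemctl --user status
      become: yes
      become_method: machinectl
      become_user: myuser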
.. _become_network: Become and network automation ============================= As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all Ansible-maintained network platforms that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary. You must set the connection type to either ``connection: network_cli`` or ``connection: httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details. You can use escalated privileges on only the specific tasks that need them, on an entire play, or on all plays. Adding ``become: yes`` and ``become_method: enable`` instructs Ansible to enter ``enable`` mode before executing the task, play, or playbook where those parameters are set. If you see this error message, the task that generated it requires ``enable`` mode to succeed: .. code-block:: console Invalid input (privileged mode required) To set ``enable`` mode for a specific task, add ``become`` at the task level: .. code-block:: yaml - name: Gather facts (eos) eos_facts: gather_subset: - "!hardware" become: yes become_method: enable To set enable mode for all tasks in a single play, add ``become`` at the play level: .. code-block:: yaml - hosts: eos-switches become: yes become_method: enable tasks: - name: Gather facts (eos) eos_facts: gather_subset: - "!hardware" Setting enable mode for all tasks --------------------------------- If you want all tasks in all plays to run in privileged mode, the best approach is to use ``group_vars``: **group_vars/eos.yml** .. code-block:: yaml ansible_connection: network_cli ansible_network_os: eos ansible_user: myuser ansible_become: yes ansible_become_method: enable Passwords for enable mode ^^^^^^^^^^^^^^^^^^^^^^^^^ If you need a password to enter ``enable`` mode, you can specify it in one of two ways: * providing the :option:`--ask-become-pass <ansible-playbook --ask-become-pass>` command line option * setting the ``ansible_become_password`` connection variable .. warning:: As a reminder, passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see :ref:`vault`. authorize and auth_pass ----------------------- Ansible still supports ``enable`` mode with ``connection: local`` for legacy network playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``: .. code-block:: yaml - hosts: eos-switches ansible_connection: local tasks: - name: Gather facts (eos) eos_facts: gather_subset: - "!hardware" provider: authorize: yes auth_pass: "{{ secret_auth_pass }}" We recommend updating your playbooks to use ``become`` for network-device ``enable`` mode consistently. The use of ``authorize`` and of ``provider`` dictionaries will be deprecated in the future. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details. .. _become_windows: Become and Windows ================== Since Ansible 2.3, ``become`` can be used on Windows hosts through the ``runas`` method. Become on Windows uses the same inventory setup and invocation arguments as ``become`` on a non-Windows host, so the setup and variable names are the same as what is defined in this document. While ``become`` can be used to assume the identity of another user, there are other uses for it with Windows hosts.
One important use is to bypass some of the limitations that are imposed when running on WinRM, such as constrained network delegation or accessing forbidden system calls like the WUA API. You can use ``become`` with the same user as ``ansible_user`` to bypass these limitations and run commands that are not normally accessible in a WinRM session. Administrative rights --------------------- Many tasks in Windows require administrative privileges to complete. When using the ``runas`` become method, Ansible will attempt to run the module with the full privileges that are available to the remote user. If it fails to elevate the user token, it will continue to use the limited token during execution. A user must have the ``SeDebugPrivilege`` to run a become process with elevated privileges. This privilege is assigned to Administrators by default. If the debug privilege is not available, the become process will run with a limited set of privileges and groups. To determine the type of token that Ansible was able to get, run the following task: .. code-block:: yaml - win_whoami: become: yes The output will look something similar to the below: .. code-block:: ansible-output ok: [windows] => { "account": { "account_name": "vagrant-domain", "domain_name": "DOMAIN", "sid": "S-1-5-21-3088887838-4058132883-1884671576-1105", "type": "User" }, "authentication_package": "Kerberos", "changed": false, "dns_domain_name": "DOMAIN.LOCAL", "groups": [ { "account_name": "Administrators", "attributes": [ "Mandatory", "Enabled by default", "Enabled", "Owner" ], "domain_name": "BUILTIN", "sid": "S-1-5-32-544", "type": "Alias" }, { "account_name": "INTERACTIVE", "attributes": [ "Mandatory", "Enabled by default", "Enabled" ], "domain_name": "NT AUTHORITY", "sid": "S-1-5-4", "type": "WellKnownGroup" }, ], "impersonation_level": "SecurityAnonymous", "label": { "account_name": "High Mandatory Level", "domain_name": "Mandatory Label", "sid": "S-1-16-12288", "type": "Label" }, "login_domain": "DOMAIN", "login_time": "2018-11-18T20:35:01.9696884+00:00", "logon_id": 114196830, "logon_server": "DC01", "logon_type": "Interactive", "privileges": { "SeBackupPrivilege": "disabled", "SeChangeNotifyPrivilege": "enabled-by-default", "SeCreateGlobalPrivilege": "enabled-by-default", "SeCreatePagefilePrivilege": "disabled", "SeCreateSymbolicLinkPrivilege": "disabled", "SeDebugPrivilege": "enabled", "SeDelegateSessionUserImpersonatePrivilege": "disabled", "SeImpersonatePrivilege": "enabled-by-default", "SeIncreaseBasePriorityPrivilege": "disabled", "SeIncreaseQuotaPrivilege": "disabled", "SeIncreaseWorkingSetPrivilege": "disabled", "SeLoadDriverPrivilege": "disabled", "SeManageVolumePrivilege": "disabled", "SeProfileSingleProcessPrivilege": "disabled", "SeRemoteShutdownPrivilege": "disabled", "SeRestorePrivilege": "disabled", "SeSecurityPrivilege": "disabled", "SeShutdownPrivilege": "disabled", "SeSystemEnvironmentPrivilege": "disabled", "SeSystemProfilePrivilege": "disabled", "SeSystemtimePrivilege": "disabled", "SeTakeOwnershipPrivilege": "disabled", "SeTimeZonePrivilege": "disabled", "SeUndockPrivilege": "disabled" }, "rights": [ "SeNetworkLogonRight", "SeBatchLogonRight", "SeInteractiveLogonRight", "SeRemoteInteractiveLogonRight" ], "token_type": "TokenPrimary", "upn": "[email protected]", "user_flags": [] } Under the ``label`` key, the ``account_name`` entry determines whether the user has Administrative rights. 
Here are the labels that can be returned and what they represent: * ``Medium``: Ansible failed to get an elevated token and ran under a limited token. Only a subset of the privileges assigned to the user are available during the module execution, and the user does not have administrative rights. * ``High``: An elevated token was used and all the privileges assigned to the user are available during the module execution. * ``System``: The ``NT AUTHORITY\System`` account is used and has the highest level of privileges available. The output will also show the list of privileges that have been granted to the user. When the privilege value is ``disabled``, the privilege is assigned to the logon token but has not been enabled. In most scenarios these privileges are automatically enabled when required. If running on a version of Ansible that is older than 2.5, or if the normal ``runas`` escalation process fails, an elevated token can be retrieved by doing one of the following: * Set the ``become_user`` to ``System``, which has full control over the operating system. * Grant ``SeTcbPrivilege`` to the user Ansible connects with on WinRM. ``SeTcbPrivilege`` is a high-level privilege that grants full control over the operating system. No user is given this privilege by default, and care should be taken if you grant this privilege to a user or group. For more information on this privilege, please see `Act as part of the operating system <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)>`_. You can use the following task to set this privilege on a Windows host: .. code-block:: yaml - name: grant the ansible user the SeTcbPrivilege right win_user_right: name: SeTcbPrivilege users: '{{ansible_user}}' action: add * Turn UAC off on the host and reboot before trying to become the user. UAC is a security feature designed to run accounts according to the ``least privilege`` principle. You can turn UAC off by running the following tasks: .. code-block:: yaml - name: turn UAC off win_regedit: path: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system name: EnableLUA data: 0 type: dword state: present register: uac_result - name: reboot after disabling UAC win_reboot: when: uac_result is changed .. Note:: Granting the ``SeTcbPrivilege`` or turning UAC off can cause Windows security vulnerabilities, and care should be taken if these steps are used. Local service accounts ---------------------- Prior to Ansible version 2.5, ``become`` only worked on Windows with a local or domain user account. Local service accounts like ``System`` or ``NetworkService`` could not be used as ``become_user`` in these older versions. This restriction has been lifted since the 2.5 release of Ansible. The three service accounts that can be set under ``become_user`` are: * System * NetworkService * LocalService Because local service accounts do not have passwords, the ``ansible_become_password`` parameter is not required and is ignored if specified. Become without setting a password --------------------------------- As of Ansible 2.8, ``become`` can be used to become a Windows local or domain account without requiring a password for that account.
For this method to work, the following requirements must be met: * The connection user has the ``SeDebugPrivilege`` privilege assigned * The connection user is part of the ``BUILTIN\Administrators`` group * The ``become_user`` has either the ``SeBatchLogonRight`` or ``SeNetworkLogonRight`` user right Using become without a password is achieved in one of two ways: * Duplicating an existing logon session's token if the account is already logged on * Using S4U to generate a logon token that is valid on the remote host only In the first scenario, the become process is spawned from another logon of that user account. This could be an existing RDP logon or console logon, but it is not guaranteed to exist at all times. This is similar to the ``Run only when user is logged on`` option for a Scheduled Task. In the case where another logon of the become account does not exist, S4U is used to create a new logon and run the module through that. This is similar to the ``Run whether user is logged on or not`` with the ``Do not store password`` option for a Scheduled Task. In this scenario, the become process will not be able to access any network resources, just like a normal WinRM process. To make a distinction between using become with no password and becoming an account that has no password, make sure to keep ``ansible_become_password`` as undefined or set ``ansible_become_password:``. .. Note:: Because there are no guarantees an existing token will exist for a user when Ansible runs, there's a high chance the become process will only have access to local resources. Use become with a password if the task needs to access network resources. Accounts without a password --------------------------- .. Warning:: As a general security best practice, you should avoid allowing accounts without passwords. Ansible can be used to become a Windows account that does not have a password (like the ``Guest`` account). To become an account without a password, set up the variables like normal but set ``ansible_become_password: ''``. Before become can work on an account like this, the local policy `Accounts: Limit local account use of blank passwords to console logon only <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852174(v=ws.11)>`_ must be disabled. This can either be done through a Group Policy Object (GPO) or with this Ansible task: .. code-block:: yaml - name: allow blank password on become win_regedit: path: HKLM:\SYSTEM\CurrentControlSet\Control\Lsa name: LimitBlankPasswordUse data: 0 type: dword state: present .. Note:: This is only for accounts that do not have a password. You still need to set the account's password under ``ansible_become_password`` if the become_user has a password. Become flags for Windows ------------------------ Ansible 2.5 added the ``become_flags`` parameter to the ``runas`` become method. This parameter can be set using the ``become_flags`` task directive or set in Ansible's configuration using ``ansible_become_flags``. The two valid values that are initially supported for this parameter are ``logon_type`` and ``logon_flags``. .. Note:: These flags should only be set when becoming a normal user account, not a local service account like LocalSystem. The key ``logon_type`` sets the type of logon operation to perform. The value can be set to one of the following: * ``interactive``: The default logon type. The process will be run under a context that is the same as when running a process locally.
This bypasses all WinRM restrictions and is the recommended method to use. * ``batch``: Runs the process under a batch context that is similar to a scheduled task with a password set. This should bypass most WinRM restrictions and is useful if the ``become_user`` is not allowed to log on interactively. * ``new_credentials``: Runs under the same credentials as the calling user, but outbound connections are run under the context of the ``become_user`` and ``become_password``, similar to ``runas.exe /netonly``. The ``logon_flags`` flag should also be set to ``netcredentials_only``. Use this flag if the process needs to access a network resource (like an SMB share) using a different set of credentials. * ``network``: Runs the process under a network context without any cached credentials. This results in the same type of logon session as running a normal WinRM process without credential delegation, and operates under the same restrictions. * ``network_cleartext``: Like the ``network`` logon type, but instead caches the credentials so it can access network resources. This is the same type of logon session as running a normal WinRM process with credential delegation. For more information, see `dwLogonType <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-logonusera>`_. The ``logon_flags`` key specifies how Windows will log the user on when creating the new process. The value can be set to none, one, or multiple of the following: * ``with_profile``: The default logon flag set. The process will load the user's profile in the ``HKEY_USERS`` registry key to ``HKEY_CURRENT_USER``. * ``netcredentials_only``: The process will use the same token as the caller but will use the ``become_user`` and ``become_password`` when accessing a remote resource. This is useful in inter-domain scenarios where there is no trust relationship, and should be used with the ``new_credentials`` ``logon_type``. By default, ``logon_flags=with_profile`` is set. If the profile should not be loaded, set ``logon_flags=``; if the profile should be loaded along with ``netcredentials_only``, set ``logon_flags=with_profile,netcredentials_only``. For more information, see `dwLogonFlags <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-createprocesswithtokenw>`_. Here are some examples of how to use ``become_flags`` with Windows tasks: .. code-block:: yaml - name: copy a file from a fileshare with custom credentials win_copy: src: \\server\share\data\file.txt dest: C:\temp\file.txt remote_src: yes vars: ansible_become: yes ansible_become_method: runas ansible_become_user: DOMAIN\user ansible_become_password: Password01 ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only - name: run a command under a batch logon win_whoami: become: yes become_flags: logon_type=batch - name: run a command and not load the user profile win_whoami: become: yes become_flags: logon_flags= Limitations of become on Windows -------------------------------- * Running a task with ``async`` and ``become`` on Windows Server 2008, 2008 R2 and Windows 7 only works when using Ansible 2.7 or newer. * By default, the become user logs on with an interactive session, so it must have the right to do so on the Windows host. If it does not inherit the ``SeAllowLogOnLocally`` privilege or inherits the ``SeDenyLogOnLocally`` privilege, the become process will fail. Either add the privilege or set the ``logon_type`` flag to change the logon type used.
* Prior to Ansible version 2.3, become only worked when ``ansible_winrm_transport`` was either ``basic`` or ``credssp``. This restriction has been lifted since the 2.4 release of Ansible for all hosts except Windows Server 2008 (non-R2 version). * The Secondary Logon service (``seclogon``) must be running to use ``ansible_become_method: runas``. .. seealso:: `Mailing List <https://groups.google.com/forum/#!forum/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups `webchat.freenode.net <https://webchat.freenode.net>`_ #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY Support becoming an unprivileged user on macOS without `allow_world_readable_tmpfiles=true` ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME action, shell, and copy ##### ADDITIONAL INFORMATION When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
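A rough sketch of the equivalence described above (the user name and paths are illustrative, and the exact ACL wording is an assumption based on macOS `chmod` documentation, not a tested implementation):
```
# Linux: grant the become user read/execute on a temporary file via POSIX.1e ACLs
setfacl -m u:becomeuser:r-x /tmp/ansible-tmp/AnsiballZ_ping.py

# macOS: there is no setfacl binary; the closest equivalent is chmod's ACL syntax
chmod +a "becomeuser allow read,execute" /tmp/ansible-tmp/AnsiballZ_ping.py
```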
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
lib/ansible/plugins/action/__init__.py
# coding: utf-8 # Copyright: (c) 2012-2014, Michael DeHaan <[email protected]> # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import base64 import json import os import random import re import stat import tempfile import time from abc import ABCMeta, abstractmethod from ansible import constants as C from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail, AnsiblePluginRemovedError from ansible.executor.module_common import modify_module from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError from ansible.module_utils.common._collections_compat import Sequence from ansible.module_utils.json_utils import _filter_non_json_lines from ansible.module_utils.six import binary_type, string_types, text_type, iteritems, with_metaclass from ansible.module_utils.six.moves import shlex_quote from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.parsing.utils.jsonify import jsonify from ansible.release import __version__ from ansible.utils.collection_loader import resource_from_fqcr from ansible.utils.display import Display from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText from ansible.vars.clean import remove_internal_keys display = Display() class ActionBase(with_metaclass(ABCMeta, object)): ''' This class is the base class for all action plugins, and defines code common to all actions. The base class handles the connection by putting/getting files and executing commands based on the current action in use. ''' # A set of valid arguments _VALID_ARGS = frozenset([]) def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj): self._task = task self._connection = connection self._play_context = play_context self._loader = loader self._templar = templar self._shared_loader_obj = shared_loader_obj self._cleanup_remote_tmp = False self._supports_check_mode = True self._supports_async = False # interpreter discovery state self._discovered_interpreter_key = None self._discovered_interpreter = False self._discovery_deprecation_warnings = [] self._discovery_warnings = [] # Backwards compat: self._display isn't really needed, just import the global display and use that. self._display = display self._used_interpreter = None @abstractmethod def run(self, tmp=None, task_vars=None): """ Action Plugins should implement this method to perform their tasks. Everything else in this base class is a helper method for the action plugin to do that. :kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls another one and wants to use the same remote tmp for both should set self._connection._shell.tmpdir rather than this parameter. :kwarg task_vars: The variables (host vars, group vars, config vars, etc) associated with this task. :returns: dictionary of results from the module Implementors of action modules may find the following variables especially useful: * Module parameters. These are stored in self._task.args """ result = {} if tmp is not None: result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. 
Action' ' plugins should set self._connection._shell.tmpdir to share' ' the tmpdir'] del tmp if self._task.async_val and not self._supports_async: raise AnsibleActionFail('async is not supported for this task.') elif self._play_context.check_mode and not self._supports_check_mode: raise AnsibleActionSkip('check mode is not supported for this task.') elif self._task.async_val and self._play_context.check_mode: raise AnsibleActionFail('check mode and async cannot be used on same task.') # Error if invalid argument is passed if self._VALID_ARGS: task_opts = frozenset(self._task.args.keys()) bad_opts = task_opts.difference(self._VALID_ARGS) if bad_opts: raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts)))) if self._connection._shell.tmpdir is None and self._early_needs_tmp_path(): self._make_tmp_path() return result def cleanup(self, force=False): """Method to perform a clean up at the end of an action plugin execution By default this is designed to clean up the shell tmpdir, and is toggled based on whether async is in use Action plugins may override this if they deem necessary, but should still call this method via super """ if force or not self._task.async_val: self._remove_tmp_path(self._connection._shell.tmpdir) def get_plugin_option(self, plugin, option, default=None): """Helper to get an option from a plugin without having to use the try/except dance everywhere to set a default """ try: return plugin.get_option(option) except (AttributeError, KeyError): return default def get_become_option(self, option, default=None): return self.get_plugin_option(self._connection.become, option, default=default) def get_connection_option(self, option, default=None): return self.get_plugin_option(self._connection, option, default=default) def get_shell_option(self, option, default=None): return self.get_plugin_option(self._connection._shell, option, default=default) def _remote_file_exists(self, path): cmd = self._connection._shell.exists(path) result = self._low_level_execute_command(cmd=cmd, sudoable=True) if result['rc'] == 0: return True return False def _configure_module(self, module_name, module_args, task_vars): ''' Handles the loading and templating of the module code through the modify_module() function. ''' if self._task.delegate_to: use_vars = task_vars.get('ansible_delegated_vars')[self._task.delegate_to] else: use_vars = task_vars split_module_name = module_name.split('.') collection_name = '.'.join(split_module_name[0:2]) if len(split_module_name) > 2 else '' leaf_module_name = resource_from_fqcr(module_name) # Search module path(s) for named module. for mod_type in self._connection.module_implementation_preferences: # Check to determine if PowerShell modules are supported, and apply # some fixes (hacks) to module name + args. if mod_type == '.ps1': # FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping # for each subsystem. 
win_collection = 'ansible.windows' rewrite_collection_names = ['ansible.builtin', 'ansible.legacy', ''] # async_status, win_stat, win_file, win_copy, and win_ping are not just like their # python counterparts but they are compatible enough for our # internal usage # NB: we only rewrite the module if it's not being called by the user (eg, an action calling something else) # and if it's unqualified or FQ to a builtin if leaf_module_name in ('stat', 'file', 'copy', 'ping') and \ collection_name in rewrite_collection_names and self._task.action != module_name: module_name = '%s.win_%s' % (win_collection, leaf_module_name) elif leaf_module_name == 'async_status' and collection_name in rewrite_collection_names: module_name = '%s.%s' % (win_collection, leaf_module_name) # TODO: move this tweak down to the modules, not extensible here # Remove extra quotes surrounding path parameters before sending to module. if leaf_module_name in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \ hasattr(self._connection._shell, '_unquote'): for key in ('src', 'dest', 'path'): if key in module_args: module_args[key] = self._connection._shell._unquote(module_args[key]) result = self._shared_loader_obj.module_loader.find_plugin_with_context(module_name, mod_type, collection_list=self._task.collections) if not result.resolved: if result.redirect_list and len(result.redirect_list) > 1: # take the last one in the redirect list, we may have successfully jumped through N other redirects target_module_name = result.redirect_list[-1] raise AnsibleError("The module {0} was redirected to {1}, which could not be loaded.".format(module_name, target_module_name)) module_path = result.plugin_resolved_path if module_path: break else: # This is a for-else: http://bit.ly/1ElPkyg raise AnsibleError("The module %s was not found in configured module paths" % (module_name)) # insert shared code and arguments into the module final_environment = dict() self._compute_environment_string(final_environment) become_kwargs = {} if self._connection.become: become_kwargs['become'] = True become_kwargs['become_method'] = self._connection.become.name become_kwargs['become_user'] = self._connection.become.get_option('become_user', playcontext=self._play_context) become_kwargs['become_password'] = self._connection.become.get_option('become_pass', playcontext=self._play_context) become_kwargs['become_flags'] = self._connection.become.get_option('become_flags', playcontext=self._play_context) # modify_module will exit early if interpreter discovery is required; re-run after if necessary for dummy in (1, 2): try: (module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar, task_vars=use_vars, module_compression=self._play_context.module_compression, async_timeout=self._task.async_val, environment=final_environment, **become_kwargs) break except InterpreterDiscoveryRequiredError as idre: self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter( action=self, interpreter_name=idre.interpreter_name, discovery_mode=idre.discovery_mode, task_vars=use_vars)) # update the local task_vars with the discovered interpreter (which might be None); # we'll propagate back to the controller in the task result discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name # update the local vars copy for the retry use_vars['ansible_facts'][discovered_key] = self._discovered_interpreter # TODO: this condition prevents 'wrong host' from being updated # but in future we would want to 
be able to update 'delegated host facts' # irrespective of task settings if not self._task.delegate_to or self._task.delegate_facts: # store in local task_vars facts collection for the retry and any other usages in this worker task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter # preserve this so _execute_module can propagate back to controller as a fact self._discovered_interpreter_key = discovered_key else: task_vars['ansible_delegated_vars'][self._task.delegate_to]['ansible_facts'][discovered_key] = self._discovered_interpreter return (module_style, module_shebang, module_data, module_path) def _compute_environment_string(self, raw_environment_out=None): ''' Builds the environment string to be used when executing the remote task. ''' final_environment = dict() if self._task.environment is not None: environments = self._task.environment if not isinstance(environments, list): environments = [environments] # The order of environments matters to make sure we merge # in the parent's values first so those in the block then # task 'win' in precedence for environment in environments: if environment is None or len(environment) == 0: continue temp_environment = self._templar.template(environment) if not isinstance(temp_environment, dict): raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment))) # very deliberately using update here instead of combine_vars, as # these environment settings should not need to merge sub-dicts final_environment.update(temp_environment) if len(final_environment) > 0: final_environment = self._templar.template(final_environment) if isinstance(raw_environment_out, dict): raw_environment_out.clear() raw_environment_out.update(final_environment) return self._connection._shell.env_prefix(**final_environment) def _early_needs_tmp_path(self): ''' Determines if a tmp path should be created before the action is executed. 
''' return getattr(self, 'TRANSFERS_FILES', False) def _is_pipelining_enabled(self, module_style, wrap_async=False): ''' Determines if we are required and can do pipelining ''' try: is_enabled = self._connection.get_option('pipelining') except (KeyError, AttributeError, ValueError): is_enabled = self._play_context.pipelining # winrm supports async pipeline # TODO: make other class property 'has_async_pipelining' to separate cases always_pipeline = self._connection.always_pipeline_modules # su does not work with pipelining # TODO: add has_pipelining class prop to become plugins become_exception = (self._connection.become.name if self._connection.become else '') != 'su' # any of these require a true conditions = [ self._connection.has_pipelining, # connection class supports it is_enabled or always_pipeline, # enabled via config or forced via connection (eg winrm) module_style == "new", # old style modules do not support pipelining not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files not wrap_async or always_pipeline, # async does not normally support pipelining unless it does (eg winrm) become_exception, ] return all(conditions) def _get_admin_users(self): ''' Returns a list of admin users that are configured for the current shell plugin ''' return self.get_shell_option('admin_users', ['root']) def _get_remote_user(self): ''' consistently get the 'remote_user' for the action plugin ''' # TODO: use 'current user running ansible' as fallback when moving away from play_context # pwd.getpwuid(os.getuid()).pw_name remote_user = None try: remote_user = self._connection.get_option('remote_user') except KeyError: # plugin does not have remote_user option, fallback to default and/play_context remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user except AttributeError: # plugin does not use config system, fallback to old play_context remote_user = self._play_context.remote_user return remote_user def _is_become_unprivileged(self): ''' The user is not the same as the connection user and is not part of the shell configured admin users ''' # if we don't use become then we know we aren't switching to a # different unprivileged user if not self._connection.become: return False # if we use become and the user is not an admin (or same user) then # we need to return become_unprivileged as True admin_users = self._get_admin_users() remote_user = self._get_remote_user() become_user = self.get_become_option('become_user') return bool(become_user and become_user not in admin_users + [remote_user]) def _make_tmp_path(self, remote_user=None): ''' Create and return a temporary path on a remote box. ''' # Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host. 
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user # This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7 if getattr(self._connection, '_remote_is_local', False): tmpdir = C.DEFAULT_LOCAL_TMP else: # NOTE: shell plugins should populate this setting anyways, but they dont do remote expansion, which # we need for 'non posix' systems like cloud-init and solaris tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False) become_unprivileged = self._is_become_unprivileged() basefile = self._connection._shell._generate_temp_dir_name() cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir) result = self._low_level_execute_command(cmd, sudoable=False) # error handling on this seems a little aggressive? if result['rc'] != 0: if result['rc'] == 5: output = 'Authentication failure.' elif result['rc'] == 255 and self._connection.transport in ('ssh',): if self._play_context.verbosity > 3: output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr']) else: output = (u'SSH encountered an unknown error during the connection. ' 'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue') elif u'No space left on device' in result['stderr']: output = result['stderr'] else: output = ('Failed to create temporary directory.' 'In some cases, you may have been able to authenticate and did not have permissions on the target directory. ' 'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. ' 'Failed command was: %s, exited with result %d' % (cmd, result['rc'])) if 'stdout' in result and result['stdout'] != u'': output = output + u", stdout output: %s" % result['stdout'] if self._play_context.verbosity > 3 and 'stderr' in result and result['stderr'] != u'': output += u", stderr output: %s" % result['stderr'] raise AnsibleConnectionFailure(output) else: self._cleanup_remote_tmp = True try: stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1) rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1] except IndexError: # stdout was empty or just space, set to / to trigger error in next if rc = '/' # Catch failure conditions, files should never be # written to locations in /. if rc == '/': raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd)) self._connection._shell.tmpdir = rc return rc def _should_remove_tmp_path(self, tmp_path): '''Determine if temporary path should be deleted or kept by user request/config''' return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path def _remove_tmp_path(self, tmp_path): '''Remove a temporary path we created. ''' if tmp_path is None and self._connection._shell.tmpdir: tmp_path = self._connection._shell.tmpdir if self._should_remove_tmp_path(tmp_path): cmd = self._connection._shell.remove(tmp_path, recurse=True) # If we have gotten here we have a working ssh configuration. # If ssh breaks we could leave tmp directories out on the remote system. 
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False) if tmp_rm_res.get('rc', 0) != 0: display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)' % (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.'))) else: self._connection._shell.tmpdir = None def _transfer_file(self, local_path, remote_path): """ Copy a file from the controller to a remote path :arg local_path: Path on controller to transfer :arg remote_path: Path on the remote system to transfer into .. warning:: * When you use this function you likely want to use fixup_perms2() on the remote_path to make sure that the remote file is readable when the user becomes a non-privileged user. * If you use fixup_perms2() on the file and copy or move the file into place, you will need to then remove filesystem acls on the file once it has been copied into place by the module. See how the copy module implements this for help. """ self._connection.put_file(local_path, remote_path) return remote_path def _transfer_data(self, remote_path, data): ''' Copies the module data out to the temporary module path. ''' if isinstance(data, dict): data = jsonify(data) afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP) afo = os.fdopen(afd, 'wb') try: data = to_bytes(data, errors='surrogate_or_strict') afo.write(data) except Exception as e: raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e)) afo.flush() afo.close() try: self._transfer_file(afile, remote_path) finally: os.unlink(afile) return remote_path def _fixup_perms2(self, remote_paths, remote_user=None, execute=True): """ We need the files we upload to be readable (and sometimes executable) by the user being sudo'd to but we want to limit other people's access (because the files could contain passwords or other private information). We achieve this in one of these ways: * If no sudo is performed or the remote_user is sudo'ing to themselves, we don't have to change permissions. * If the remote_user sudo's to a privileged user (for instance, root), we don't have to change permissions * If the remote_user sudo's to an unprivileged user then we attempt to grant the unprivileged user access via file system acls. * If granting file system acls fails we try to change the owner of the file with chown which only works in case the remote_user is privileged or the remote systems allows chown calls by unprivileged users (e.g. HP-UX) * If the chown fails, we check if ansible_common_remote_group is set. If it is, we attempt to chgrp the file to its value. This is useful if the remote_user has a group in common with the become_user. As the remote_user, we can chgrp the file to that group and allow the become_user to read it. * If (the chown fails AND ansible_common_remote_group is not set) OR (ansible_common_remote_group is set AND the chgrp (or following chmod) returned non-zero), we can set the file to be world readable so that the second unprivileged user can read the file. Since this could allow other users to get access to private information we only do this if ansible is configured with "allow_world_readable_tmpfiles" in the ansible.cfg. Also note that when ansible_common_remote_group is set this final fallback is very unlikely to ever be triggered, so long as chgrp was successful.
But just because the chgrp was successful, does not mean Ansible can necessarily access the files (if, for example, the variable was set to a group that remote_user is in, and can chgrp to, but does not have in common with become_user). """ if remote_user is None: remote_user = self._get_remote_user() # Step 1: Are we on windows? if getattr(self._connection._shell, "_IS_WINDOWS", False): # This won't work on Powershell as-is, so we'll just completely # skip until we have a need for it, at which point we'll have to do # something different. return remote_paths # Step 2: If we're not becoming an unprivileged user, we are roughly # done. Make the files +x if we're asked to, and return. if not self._is_become_unprivileged(): if execute: # Can't depend on the file being transferred with execute permissions. # Only need user perms because no become was used here res = self._remote_chmod(remote_paths, 'u+x') if res['rc'] != 0: raise AnsibleError( 'Failed to set execute bit on remote files ' '(rc: {0}, err: {1})'.format( res['rc'], to_native(res['stderr']))) return remote_paths # If we're still here, we have an unprivileged user that's different # than the ssh user. become_user = self.get_become_option('become_user') # Try to use file system acls to make the files readable for sudo'd # user if execute: chmod_mode = 'rx' setfacl_mode = 'r-x' else: chmod_mode = 'rX' # TODO: this form fails silently on freebsd. We currently # never call _fixup_perms2() with execute=False but if we # start to we'll have to fix this. setfacl_mode = 'r-X' # Step 3a: Are we able to use setfacl to add user ACLs to the file? res = self._remote_set_user_facl( remote_paths, become_user, setfacl_mode) if res['rc'] == 0: return remote_paths # Step 3b: Set execute if we need to. We do this before anything else # because some of the methods below might work but not let us set +x # as part of them. if execute: res = self._remote_chmod(remote_paths, 'u+x') if res['rc'] != 0: raise AnsibleError( 'Failed to set file mode on remote temporary files ' '(rc: {0}, err: {1})'.format( res['rc'], to_native(res['stderr']))) # Step 3c: File system ACLs failed above; try falling back to chown. res = self._remote_chown(remote_paths, become_user) if res['rc'] == 0: return remote_paths # Check if we are an admin/root user. If we are and got here, it means # we failed to chown as root and something weird has happened. if remote_user in self._get_admin_users(): raise AnsibleError( 'Failed to change ownership of the temporary files Ansible ' 'needs to create despite connecting as a privileged user. ' 'Unprivileged become user would be unable to read the ' 'file.') # Step 3d: Common group # Otherwise, we're a normal user. We failed to chown the paths to the # unprivileged user, but if we have a common group with them, we should # be able to chown it to that. # # Note that we have no way of knowing if this will actually work... just # because chgrp exits successfully does not mean that Ansible will work. # We could check if the become user is in the group, but this would # create an extra round trip. # # Also note that due to the above, this can prevent the # ALLOW_WORLD_READABLE_TMPFILES logic below from ever getting called. We # leave this up to the user to rectify if they have both of these # features enabled. 
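# NOTE: on macOS neither of the earlier fallbacks applies (there is no setfacl
# binary and an unprivileged chown is rejected; see issue #70648 in this
# record), so execution falls through to this chgrp step or to the
# world-readable fallback below. macOS does expose equivalent ACL
# functionality through `chmod +a`; see the linked pull request.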
group = self.get_shell_option('common_remote_group') if group is not None: res = self._remote_chgrp(remote_paths, group) if res['rc'] == 0: # If ALLOW_WORLD_READABLE_TMPFILES is set, we should warn the # user that something might go weirdly here. if C.ALLOW_WORLD_READABLE_TMPFILES: display.warning( 'Both common_remote_group and ' 'allow_world_readable_tmpfiles are set. chgrp was ' 'successful, but there is no guarantee that Ansible ' 'will be able to read the files after this operation, ' 'particularly if common_remote_group was set to a ' 'group of which the unprivileged become user is not a ' 'member. In this situation, ' 'allow_world_readable_tmpfiles is a no-op. See this ' 'URL for more details: ' 'https://docs.ansible.com/ansible/become.html' '#becoming-an-unprivileged-user') if execute: group_mode = 'g+rwx' else: group_mode = 'g+rw' res = self._remote_chmod(remote_paths, group_mode) if res['rc'] == 0: return remote_paths # Step 4: World-readable temp directory if self.get_shell_option( 'world_readable_temp', C.ALLOW_WORLD_READABLE_TMPFILES): # chown and fs acls failed -- do things this insecure way only if # the user opted in in the config file display.warning( 'Using world-readable permissions for temporary files Ansible ' 'needs to create when becoming an unprivileged user. This may ' 'be insecure. For information on securing this, see ' 'https://docs.ansible.com/ansible/user_guide/become.html' '#risks-of-becoming-an-unprivileged-user') res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode) if res['rc'] == 0: return remote_paths raise AnsibleError( 'Failed to set file mode on remote files ' '(rc: {0}, err: {1})'.format( res['rc'], to_native(res['stderr']))) raise AnsibleError( 'Failed to set permissions on the temporary files Ansible needs ' 'to create when becoming an unprivileged user ' '(rc: %s, err: %s}). For information on working around this, see ' 'https://docs.ansible.com/ansible/become.html' '#becoming-an-unprivileged-user' % ( res['rc'], to_native(res['stderr']))) def _remote_chmod(self, paths, mode, sudoable=False): ''' Issue a remote chmod command ''' cmd = self._connection._shell.chmod(paths, mode) res = self._low_level_execute_command(cmd, sudoable=sudoable) return res def _remote_chown(self, paths, user, sudoable=False): ''' Issue a remote chown command ''' cmd = self._connection._shell.chown(paths, user) res = self._low_level_execute_command(cmd, sudoable=sudoable) return res def _remote_chgrp(self, paths, group, sudoable=False): ''' Issue a remote chgrp command ''' cmd = self._connection._shell.chgrp(paths, group) res = self._low_level_execute_command(cmd, sudoable=sudoable) return res def _remote_set_user_facl(self, paths, user, mode, sudoable=False): ''' Issue a remote call to setfacl ''' cmd = self._connection._shell.set_user_facl(paths, user, mode) res = self._low_level_execute_command(cmd, sudoable=sudoable) return res def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True): ''' Get information from remote file. ''' if tmp is not None: display.warning('_execute_remote_stat no longer honors the tmp parameter. 
Action' ' plugins should set self._connection._shell.tmpdir to share' ' the tmpdir') del tmp # No longer used module_args = dict( path=path, follow=follow, get_checksum=checksum, checksum_algorithm='sha1', ) mystat = self._execute_module(module_name='ansible.legacy.stat', module_args=module_args, task_vars=all_vars, wrap_async=False) if mystat.get('failed'): msg = mystat.get('module_stderr') if not msg: msg = mystat.get('module_stdout') if not msg: msg = mystat.get('msg') raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg)) if not mystat['stat']['exists']: # empty might be matched, 1 should never match, also backwards compatible mystat['stat']['checksum'] = '1' # happens sometimes when it is a dir and not on bsd if 'checksum' not in mystat['stat']: mystat['stat']['checksum'] = '' elif not isinstance(mystat['stat']['checksum'], string_types): raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum'])) return mystat['stat'] def _remote_checksum(self, path, all_vars, follow=False): ''' Produces a remote checksum given a path, Returns a number 0-4 for specific errors instead of checksum, also ensures it is different 0 = unknown error 1 = file does not exist, this might not be an error 2 = permissions issue 3 = its a directory, not a file 4 = stat module failed, likely due to not finding python 5 = appropriate json module not found ''' x = "0" # unknown error has occurred try: remote_stat = self._execute_remote_stat(path, all_vars, follow=follow) if remote_stat['exists'] and remote_stat['isdir']: x = "3" # its a directory not a file else: x = remote_stat['checksum'] # if 1, file is missing except AnsibleError as e: errormsg = to_text(e) if errormsg.endswith(u'Permission denied'): x = "2" # cannot read file elif errormsg.endswith(u'MODULE FAILURE'): x = "4" # python not found or module uncaught exception elif 'json' in errormsg: x = "5" # json module needed finally: return x # pylint: disable=lost-exception def _remote_expand_user(self, path, sudoable=True, pathsep=None): ''' takes a remote path and performs tilde/$HOME expansion on the remote host ''' # We only expand ~/path and ~username/path if not path.startswith('~'): return path # Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home # dir there. split_path = path.split(os.path.sep, 1) expand_path = split_path[0] if expand_path == '~': # Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host. # As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user # This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7 become_user = self.get_become_option('become_user') if getattr(self._connection, '_remote_is_local', False): pass elif sudoable and self._connection.become and become_user: expand_path = '~%s' % become_user else: # use remote user instead, if none set default to current user expand_path = '~%s' % (self._get_remote_user() or '') # use shell to construct appropriate command and execute cmd = self._connection._shell.expand_user(expand_path) data = self._low_level_execute_command(cmd, sudoable=False) try: initial_fragment = data['stdout'].strip().splitlines()[-1] except IndexError: initial_fragment = None if not initial_fragment: # Something went wrong trying to expand the path remotely. 
Try using pwd, if not, return
            # the original string
            cmd = self._connection._shell.pwd()
            pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
            if pwd:
                expanded = pwd
            else:
                expanded = path
        elif len(split_path) > 1:
            expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
        else:
            expanded = initial_fragment

        if '..' in os.path.dirname(expanded).split('/'):
            raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._play_context.remote_addr)

        return expanded

    def _strip_success_message(self, data):
        '''
        Removes the BECOME-SUCCESS message from the data.
        '''
        if data.strip().startswith('BECOME-SUCCESS-'):
            data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
        return data

    def _update_module_args(self, module_name, module_args, task_vars):

        # set check mode in the module arguments, if required
        if self._play_context.check_mode:
            if not self._supports_check_mode:
                raise AnsibleError("check mode is not supported for this operation")
            module_args['_ansible_check_mode'] = True
        else:
            module_args['_ansible_check_mode'] = False

        # set no log in the module arguments, if required
        no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
        module_args['_ansible_no_log'] = self._play_context.no_log or no_target_syslog

        # set debug in the module arguments, if required
        module_args['_ansible_debug'] = C.DEFAULT_DEBUG

        # let module know we are in diff mode
        module_args['_ansible_diff'] = self._play_context.diff

        # let module know our verbosity
        module_args['_ansible_verbosity'] = display.verbosity

        # give the module information about the ansible version
        module_args['_ansible_version'] = __version__

        # give the module information about its name
        module_args['_ansible_module_name'] = module_name

        # set the syslog facility to be used in the module
        module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)

        # let module know about filesystems that selinux treats specially
        module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS

        # what to do when parameter values are converted to strings
        module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION

        # give the module the socket for persistent connections
        module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
        if not module_args['_ansible_socket']:
            module_args['_ansible_socket'] = task_vars.get('ansible_socket')

        # make sure all commands use the designated shell executable
        module_args['_ansible_shell_executable'] = self._play_context.executable

        # make sure modules are aware if they need to keep the remote files
        module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES

        # make sure all commands use the designated temporary directory if created
        if self._is_become_unprivileged():
            # force fallback on remote_tmp as user cannot normally write to dir
            module_args['_ansible_tmpdir'] = None
        else:
            module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir

        # make sure the remote_tmp value is sent through in case modules needs to create their own
        module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')

    def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None,
                        persist_files=False, delete_remote_tmp=None, wrap_async=False):
        '''
        Transfer and run a module along with its arguments.
        '''
        if tmp is not None:
            display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
                            ' should set self._connection._shell.tmpdir to share the tmpdir')
        del tmp  # No longer used
        if delete_remote_tmp is not None:
            display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
                            ' Action plugins should check self._connection._shell.tmpdir to'
                            ' see if a tmpdir existed before they were called to determine'
                            ' if they are responsible for removing it.')
        del delete_remote_tmp  # No longer used

        tmpdir = self._connection._shell.tmpdir

        # We set the module_style to new here so the remote_tmp is created
        # before the module args are built if remote_tmp is needed (async).
        # If the module_style turns out to not be new and we didn't create the
        # remote tmp here, it will still be created. This must be done before
        # calling self._update_module_args() so the module wrapper has the
        # correct remote_tmp value set
        if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
            self._make_tmp_path()
            tmpdir = self._connection._shell.tmpdir

        if task_vars is None:
            task_vars = dict()

        # if a module name was not specified for this execution, use the action from the task
        if module_name is None:
            module_name = self._task.action
        if module_args is None:
            module_args = self._task.args

        self._update_module_args(module_name, module_args, task_vars)

        # FIXME: convert async_wrapper.py to not rely on environment variables
        # make sure we get the right async_dir variable, backwards compatibility
        # means we need to lookup the env value ANSIBLE_ASYNC_DIR first
        remove_async_dir = None
        if wrap_async or self._task.async_val:
            env_async_dir = [e for e in self._task.environment if "ANSIBLE_ASYNC_DIR" in e]
            if len(env_async_dir) > 0:
                msg = "Setting the async dir from the environment keyword " \
                      "ANSIBLE_ASYNC_DIR is deprecated. Set the async_dir " \
                      "shell option instead"
                self._display.deprecated(msg, "2.12", collection_name='ansible.builtin')
            else:
                # ANSIBLE_ASYNC_DIR is not set on the task, we get the value
                # from the shell option and temporarily add to the environment
                # list for async_wrapper to pick up
                async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
                remove_async_dir = len(self._task.environment)
                self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})

        # FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
        (module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
        display.vvv("Using module file %s" % module_path)
        if not shebang and module_style != 'binary':
            raise AnsibleError("module (%s) is missing interpreter line" % module_name)

        self._used_interpreter = shebang
        remote_module_path = None

        if not self._is_pipelining_enabled(module_style, wrap_async):
            # we might need remote tmp dir
            if tmpdir is None:
                self._make_tmp_path()
                tmpdir = self._connection._shell.tmpdir

            remote_module_filename = self._connection._shell.get_remote_filename(module_path)
            remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)

        args_file_path = None
        if module_style in ('old', 'non_native_want_json', 'binary'):
            # we'll also need a tmp file to hold our module arguments
            args_file_path = self._connection._shell.join_path(tmpdir, 'args')

        if remote_module_path or module_style != 'new':
            display.debug("transferring module to remote %s" % remote_module_path)
            if module_style == 'binary':
                self._transfer_file(module_path, remote_module_path)
            else:
                self._transfer_data(remote_module_path, module_data)
            if module_style == 'old':
                # we need to dump the module args to a k=v string in a file on
                # the remote system, which can be read and parsed by the module
                args_data = ""
                for k, v in iteritems(module_args):
                    args_data += '%s=%s ' % (k, shlex_quote(text_type(v)))
                self._transfer_data(args_file_path, args_data)
            elif module_style in ('non_native_want_json', 'binary'):
                self._transfer_data(args_file_path, json.dumps(module_args))
            display.debug("done transferring module to remote")

        environment_string = self._compute_environment_string()

        # remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
        # the async_wrapper task - this is so the async_status plugin doesn't
        # fire a deprecation warning when it runs after this task
        if remove_async_dir is not None:
            del self._task.environment[remove_async_dir]

        remote_files = []
        if tmpdir and remote_module_path:
            remote_files = [tmpdir, remote_module_path]

        if args_file_path:
            remote_files.append(args_file_path)

        sudoable = True
        in_data = None
        cmd = ""

        if wrap_async and not self._connection.always_pipeline_modules:
            # configure, upload, and chmod the async_wrapper module
            (async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(
                module_name='ansible.legacy.async_wrapper', module_args=dict(), task_vars=task_vars)
            async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
            remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)

            self._transfer_data(remote_async_module_path, async_module_data)
            remote_files.append(remote_async_module_path)

            async_limit = self._task.async_val
            async_jid = str(random.randint(0, 999999999999))

            # call the interpreter for async_wrapper directly
            # this permits use of a script for an interpreter on non-Linux platforms
            # TODO: re-implement async_wrapper as a regular module to avoid this special case
            interpreter = shebang.replace('#!', '').strip()
            async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]

            if environment_string:
                async_cmd.insert(0, environment_string)

            if args_file_path:
                async_cmd.append(args_file_path)
            else:
                # maintain a fixed number of positional parameters for async_wrapper
                async_cmd.append('_')

            if not self._should_remove_tmp_path(tmpdir):
                async_cmd.append("-preserve_tmp")

            cmd = " ".join(to_text(x) for x in async_cmd)

        else:

            if self._is_pipelining_enabled(module_style):
                in_data = module_data
                display.vvv("Pipelining is enabled.")
            else:
                cmd = remote_module_path

            cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()

        # Fix permissions of the tmpdir path and tmpdir files. This should be called after all
        # files have been transferred.
        if remote_files:
            # remove none/empty
            remote_files = [x for x in remote_files if x]
            self._fixup_perms2(remote_files, self._get_remote_user())

        # actually execute
        res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)

        # parse the main result
        data = self._parse_returned_data(res)

        # NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
        # get internal info before cleaning
        if data.pop("_ansible_suppress_tmpdir_delete", False):
            self._cleanup_remote_tmp = False

        # NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
        if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
            data['ansible_module_results'] = data['results']
            del data['results']
            display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")

        # remove internal keys
        remove_internal_keys(data)

        if wrap_async:
            # async_wrapper will clean up its tmpdir on its own so we want the controller side to
            # forget about it now
            self._connection._shell.tmpdir = None

            # FIXME: for backwards compat, figure out if still makes sense
            data['changed'] = True

        # pre-split stdout/stderr into lines if needed
        if 'stdout' in data and 'stdout_lines' not in data:
            # if the value is 'False', a default won't catch it.
            txt = data.get('stdout', None) or u''
            data['stdout_lines'] = txt.splitlines()
        if 'stderr' in data and 'stderr_lines' not in data:
            # if the value is 'False', a default won't catch it.
            txt = data.get('stderr', None) or u''
            data['stderr_lines'] = txt.splitlines()

        # propagate interpreter discovery results back to the controller
        if self._discovered_interpreter_key:
            if data.get('ansible_facts') is None:
                data['ansible_facts'] = {}

            data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter

        if self._discovery_warnings:
            if data.get('warnings') is None:
                data['warnings'] = []
            data['warnings'].extend(self._discovery_warnings)

        if self._discovery_deprecation_warnings:
            if data.get('deprecations') is None:
                data['deprecations'] = []
            data['deprecations'].extend(self._discovery_deprecation_warnings)

        # mark the entire module results untrusted as a template right here, since the current action could
        # possibly template one of these values.
        data = wrap_var(data)

        display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
        return data

    def _parse_returned_data(self, res):
        try:
            filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''))
            for w in warnings:
                display.warning(w)

            data = json.loads(filtered_output)
            data['_ansible_parsed'] = True
        except ValueError:
            # not valid json, lets try to capture error
            data = dict(failed=True, _ansible_parsed=False)
            data['module_stdout'] = res.get('stdout', u'')
            if 'stderr' in res:
                data['module_stderr'] = res['stderr']
                if res['stderr'].startswith(u'Traceback'):
                    data['exception'] = res['stderr']

            # in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
            if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
                data['exception'] = data['module_stdout']

            # The default
            data['msg'] = "MODULE FAILURE"

            # try to figure out if we are missing interpreter
            if self._used_interpreter is not None:
                match = re.compile('%s: (?:No such file or directory|not found)' % self._used_interpreter.lstrip('!#'))
                if match.search(data['module_stderr']) or match.search(data['module_stdout']):
                    data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."

            # always append hint
            data['msg'] += '\nSee stdout/stderr for the exact error'

            if 'rc' in res:
                data['rc'] = res['rc']
        return data

    # FIXME: move to connection base
    def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None,
                                   encoding_errors='surrogate_then_replace', chdir=None):
        '''
        This is the function which executes the low level shell command, which
        may be commands to create/remove directories for temporary files, or to
        run the module code or python directly when pipelining.

        :kwarg encoding_errors: If the value returned by the command isn't
            utf-8 then we have to figure out how to transform it to unicode.
            If the value is just going to be displayed to the user (or
            discarded) then the default of 'replace' is fine. If the data is
            used as a key or is going to be written back out to a file
            verbatim, then this won't work. May have to use some sort of
            replacement strategy (python3 could use surrogateescape)
        :kwarg chdir: cd into this directory before executing the command.
        '''

        display.debug("_low_level_execute_command(): starting")
        # if not cmd:
        #     # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
        #     display.debug("_low_level_execute_command(): no command, exiting")
        #     return dict(stdout='', stderr='', rc=254)

        if chdir:
            display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
            cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)

        # https://github.com/ansible/ansible/issues/68054
        if executable:
            self._connection._shell.executable = executable

        ruser = self._get_remote_user()
        buser = self.get_become_option('become_user')
        if (sudoable and self._connection.become and  # if sudoable and have become
                resource_from_fqcr(self._connection.transport) != 'network_cli' and  # if not using network_cli
                (C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))):  # if we allow same user PE or users are different and either is set
            display.debug("_low_level_execute_command(): using become for this command")
            cmd = self._connection.become.build_become_command(cmd, self._connection._shell)

        if self._connection.allow_executable:
            if executable is None:
                executable = self._play_context.executable
                # mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
                # only applied for the default executable to avoid interfering with the raw action
                cmd = self._connection._shell.append_command(cmd, 'sleep 0')
            if executable:
                cmd = executable + ' -c ' + shlex_quote(cmd)

        display.debug("_low_level_execute_command(): executing: %s" % (cmd,))

        # Change directory to basedir of task for command execution when connection is local
        if self._connection.transport == 'local':
            self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')

        rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)

        # stdout and stderr may be either a file-like or a bytes object.
        # Convert either one to a text type
        if isinstance(stdout, binary_type):
            out = to_text(stdout, errors=encoding_errors)
        elif not isinstance(stdout, text_type):
            out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
        else:
            out = stdout

        if isinstance(stderr, binary_type):
            err = to_text(stderr, errors=encoding_errors)
        elif not isinstance(stderr, text_type):
            err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
        else:
            err = stderr

        if rc is None:
            rc = 0

        # be sure to remove the BECOME-SUCCESS message now
        out = self._strip_success_message(out)

        display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
        return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())

    def _get_diff_data(self, destination, source, task_vars, source_file=True):

        # Note: Since we do not diff the source and destination before we transform from bytes into
        # text the diff between source and destination may not be accurate. To fix this, we'd need
        # to move the diffing from the callback plugins into here.
        #
        # Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
        # b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
        # character: diff['before'] = u'�' ; diff['after'] = u'�' When the callback plugin later
        # diffs before and after it shows an empty diff.

        diff = {}
        display.debug("Going to peek to see if file has changed permissions")
        peek_result = self._execute_module(
            module_name='ansible.legacy.file', module_args=dict(path=destination, _diff_peek=True),
            task_vars=task_vars, persist_files=True)

        if peek_result.get('failed', False):
            display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
            return diff

        if peek_result.get('rc', 0) == 0:

            if peek_result.get('state') in (None, 'absent'):
                diff['before'] = u''
            elif peek_result.get('appears_binary'):
                diff['dst_binary'] = 1
            elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
                diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
            else:
                display.debug(u"Slurping the file %s" % source)
                dest_result = self._execute_module(
                    module_name='ansible.legacy.slurp', module_args=dict(path=destination),
                    task_vars=task_vars, persist_files=True)
                if 'content' in dest_result:
                    dest_contents = dest_result['content']
                    if dest_result['encoding'] == u'base64':
                        dest_contents = base64.b64decode(dest_contents)
                    else:
                        raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
                    diff['before_header'] = destination
                    diff['before'] = to_text(dest_contents)

            if source_file:
                st = os.stat(source)
                if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
                    diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
                else:
                    display.debug("Reading local copy of the file %s" % source)
                    try:
                        with open(source, 'rb') as src:
                            src_contents = src.read()
                    except Exception as e:
                        raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))

                    if b"\x00" in src_contents:
                        diff['src_binary'] = 1
                    else:
                        diff['after_header'] = source
                        diff['after'] = to_text(src_contents)
            else:
                display.debug(u"source of file passed in")
                diff['after_header'] = u'dynamically generated'
                diff['after'] = source

        if self._play_context.no_log:
            if 'before' in diff:
                diff["before"] = u""
            if 'after' in diff:
                diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"

        return diff

    def _find_needle(self, dirname, needle):
        '''
        find a needle in haystack of paths, optionally using 'dirname' as a subdir.
        This will build the ordered list of paths to search and pass them to dwim
        to get back the first existing file found.
        '''

        # dwim already deals with playbook basedirs
        path_stack = self._task.get_search_path()

        # if missing it will return a file not found exception
        return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
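Taken together, `_execute_module`, `_parse_returned_data`, and `_low_level_execute_command` above are the surface that custom action plugins build on. As a minimal sketch of that pattern — the plugin file name and the choice of `ansible.legacy.ping` are illustrative assumptions for this example, not code from the file above:

```python
# Hypothetical action_plugins/my_ping_facts.py -- a minimal sketch assuming
# the 2.10-era ActionBase API shown above; not part of the ansible source.
from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        # The parent run() seeds the result dict (warnings and so on)
        result = super(ActionModule, self).run(tmp, task_vars)

        # A raw shell probe via the low-level helper; sudoable=True lets
        # become wrap the command when privilege escalation is in effect.
        uname = self._low_level_execute_command('uname -a', sudoable=True)
        result['target_uname'] = uname['stdout'].strip()

        # Delegate the real work to a module; _execute_module handles
        # transfer or pipelining, tmpdir management, and result parsing.
        result.update(self._execute_module(
            module_name='ansible.legacy.ping',
            module_args={'data': self._task.args.get('data', 'pong')},
            task_vars=task_vars,
        ))
        return result
```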
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY
Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true`

##### ISSUE TYPE
- Feature Idea

##### COMPONENT NAME
action, shell, and copy

##### ADDITIONAL INFORMATION
When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
test/integration/targets/become_unprivileged/aliases
destructive
shippable/posix/group1
skip/aix
needs/ssh
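The aliases above gate the integration tests for issue 70648. Purely as an illustration of the `chmod +a` ACL grant the issue proposes — the user name and path below are placeholders, and this is not the code the linked PR added to the shell plugins:

```python
# A sketch of the BSD/macOS ACL grant described in the issue body: the
# macOS analogue of `setfacl -m u:USER:r`. User and paths are placeholders.
import subprocess

def macos_allow_read(paths, become_user):
    # `chmod +a 'USER allow read' FILE...` appends an allow ACE for the
    # unprivileged become user without touching ownership.
    ace = '%s allow read' % become_user
    return subprocess.run(['chmod', '+a', ace] + list(paths), check=True)

# e.g. macos_allow_read(['/tmp/ansible-tmp-1/AnsiballZ_ping.py'], 'becomeuser')
```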
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY
Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true`

##### ISSUE TYPE
- Feature Idea

##### COMPONENT NAME
action, shell, and copy

##### ADDITIONAL INFORMATION
When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
test/integration/targets/become_unprivileged/chmod_acl_macos/test.yml
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY
Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true`

##### ISSUE TYPE
- Feature Idea

##### COMPONENT NAME
action, shell, and copy

##### ADDITIONAL INFORMATION
When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
test/integration/targets/become_unprivileged/runme.sh
#!/usr/bin/env bash

set -eux

begin_sandwich() {
    ansible-playbook setup_unpriv_users.yml -i inventory -v "$@"
}

end_sandwich() {
    unset ANSIBLE_KEEP_REMOTE_FILES
    unset ANSIBLE_COMMON_REMOTE_GROUP
    unset ANSIBLE_BECOME_PASS

    # Do a few cleanup tasks (nuke users, groups, and homedirs, undo config changes)
    ansible-playbook cleanup_unpriv_users.yml -i inventory -v "$@"

    # We do these last since they do things like remove groups and will error
    # if there are still users in them.
    for pb in */cleanup.yml; do
        ansible-playbook "$pb" -i inventory -v "$@"
    done
}

trap "end_sandwich \"\$@\"" EXIT

# Common group tests
begin_sandwich "$@"
ansible-playbook common_remote_group/setup.yml -i inventory -v "$@"
export ANSIBLE_KEEP_REMOTE_FILES=True
export ANSIBLE_COMMON_REMOTE_GROUP=commongroup
export ANSIBLE_BECOME_PASS='iWishIWereCoolEnoughForRoot!'
ANSIBLE_ACTION_PLUGINS="$(pwd)/action_plugins"
export ANSIBLE_ACTION_PLUGINS
ansible-playbook common_remote_group/test.yml -i inventory -v "$@"
end_sandwich "$@"
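The script above sandwiches the `common_remote_group` tests between user setup and cleanup plays. That shared-group mechanism is one rung in a ladder of permission fallbacks; the sketch below is illustrative pseudologic of the ordering those tests exercise (the callables stand in for the `ActionBase._remote_*` helpers) and is not the real `_fixup_perms2` implementation:

```python
# Illustrative-only: the first permission mechanism that succeeds wins, in
# the order setfacl -> chown -> common remote group -> world-readable files.
def fixup_perms_sketch(paths, setfacl, chown, chgrp_common, chmod_world):
    for attempt, label in ((setfacl, 'POSIX ACLs'),
                           (chown, 'chown to the become user'),
                           (chgrp_common, 'common remote group'),
                           (chmod_world, 'world-readable tmpfiles')):
        if attempt(paths).get('rc') == 0:
            return label
    raise RuntimeError('cannot grant the become user access to %s' % (paths,))

# Example: no setfacl, chown refused, but chgrp to 'commongroup' works.
print(fixup_perms_sketch(['/tmp/x'],
                         lambda p: {'rc': 1},
                         lambda p: {'rc': 1},
                         lambda p: {'rc': 0},
                         lambda p: {'rc': 0}))  # -> 'common remote group'
```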
closed
ansible/ansible
https://github.com/ansible/ansible
70,648
Use `chmod` instead of `setfacl` on macOS when becoming an unprivileged user
##### SUMMARY
Support become unprivileged user on macOS without `allow_world_readable_tmpfiles=true`

##### ISSUE TYPE
- Feature Idea

##### COMPONENT NAME
action, shell, and copy

##### ADDITIONAL INFORMATION
When using become where the login user and the become user are both unprivileged, Ansible attempts to securely change file permissions to only allow the become user to read the files. It uses `setfacl` and falls back to `chown`. Both of these fail on macOS, as there is no `setfacl` command and `chown` is not permitted. macOS actually does provide the same functionality as `setfacl`, using `chmod +a`. The syntax is a bit different from `setfacl`, but it should be able to do what Ansible needs.
https://github.com/ansible/ansible/issues/70648
https://github.com/ansible/ansible/pull/70785
79f7104556b3052e9e2d0c095ec2e4e0b1e61a92
0d7c144ce44cd40ffa7c109a027d0927961d6a63
2020-07-14T21:01:19Z
python
2020-08-04T18:32:48Z
test/units/plugins/action/test_action.py
# -*- coding: utf-8 -*-
# (c) 2015, Florian Apolloner <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import re

from ansible import constants as C
from units.compat import unittest
from units.compat.mock import patch, MagicMock, mock_open

from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import shlex_quote, builtins
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.action import ActionBase
from ansible.template import Templar
from ansible.vars.clean import clean_facts

from units.mock.loader import DictDataLoader


python_module_replacers = br"""
#!/usr/bin/python

#ANSIBLE_VERSION = "<<ANSIBLE_VERSION>>"
#MODULE_COMPLEX_ARGS = "<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"
#SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"

test = u'Toshio \u304f\u3089\u3068\u307f'
from ansible.module_utils.basic import *
"""

powershell_module_replacers = b"""
WINDOWS_ARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
# POWERSHELL_COMMON
"""


def _action_base():
    fake_loader = DictDataLoader({
    })
    mock_module_loader = MagicMock()
    mock_shared_loader_obj = MagicMock()
    mock_shared_loader_obj.module_loader = mock_module_loader
    mock_connection_loader = MagicMock()

    mock_shared_loader_obj.connection_loader = mock_connection_loader
    mock_connection = MagicMock()

    play_context = MagicMock()

    action_base = DerivedActionBase(task=None,
                                    connection=mock_connection,
                                    play_context=play_context,
                                    loader=fake_loader,
                                    templar=None,
                                    shared_loader_obj=mock_shared_loader_obj)
    return action_base


class DerivedActionBase(ActionBase):
    TRANSFERS_FILES = False

    def run(self, tmp=None, task_vars=None):
        # We're not testing the plugin run() method, just the helper
        # methods ActionBase defines
        return super(DerivedActionBase, self).run(tmp=tmp, task_vars=task_vars)


class TestActionBase(unittest.TestCase):

    def test_action_base_run(self):
        mock_task = MagicMock()
        mock_task.action = "foo"
        mock_task.args = dict(a=1, b=2, c=3)

        mock_connection = MagicMock()

        play_context = PlayContext()

        mock_task.async_val = None
        action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
        results = action_base.run()
        self.assertEqual(results, dict())

        mock_task.async_val = 0
        action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
        results = action_base.run()
        self.assertEqual(results, {})

    def test_action_base__configure_module(self):
        fake_loader = DictDataLoader({
        })

        # create our fake task
        mock_task = MagicMock()
        mock_task.action = "copy"
        mock_task.async_val = 0
        mock_task.delegate_to = None

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()

        # create a mock shared loader object
        def mock_find_plugin_with_context(name, options, collection_list=None):
            mockctx = MagicMock()
            if name == 'badmodule':
                mockctx.resolved = False
                mockctx.plugin_resolved_path = None
            elif '.ps1' in options:
                mockctx.resolved = True
                mockctx.plugin_resolved_path = '/fake/path/to/%s.ps1' % name
            else:
                mockctx.resolved = True
                mockctx.plugin_resolved_path = '/fake/path/to/%s' % name
            return mockctx

        mock_module_loader = MagicMock()
        mock_module_loader.find_plugin_with_context.side_effect = mock_find_plugin_with_context
        mock_shared_obj_loader = MagicMock()
        mock_shared_obj_loader.module_loader = mock_module_loader

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=fake_loader,
            templar=Templar(loader=fake_loader),
            shared_loader_obj=mock_shared_obj_loader,
        )

        # test python module formatting
        with patch.object(builtins, 'open', mock_open(read_data=to_bytes(python_module_replacers.strip(), encoding='utf-8'))):
            with patch.object(os, 'rename'):
                mock_task.args = dict(a=1, foo='fö〩')
                mock_connection.module_implementation_preferences = ('',)
                (style, shebang, data, path) = action_base._configure_module(mock_task.action, mock_task.args,
                                                                             task_vars=dict(ansible_python_interpreter='/usr/bin/python'))
                self.assertEqual(style, "new")
                self.assertEqual(shebang, u"#!/usr/bin/python")

                # test module not found
                self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args, {})

        # test powershell module formatting
        with patch.object(builtins, 'open', mock_open(read_data=to_bytes(powershell_module_replacers.strip(), encoding='utf-8'))):
            mock_task.action = 'win_copy'
            mock_task.args = dict(b=2)
            mock_connection.module_implementation_preferences = ('.ps1',)
            (style, shebang, data, path) = action_base._configure_module('stat', mock_task.args, {})
            self.assertEqual(style, "new")
            self.assertEqual(shebang, u'#!powershell')

            # test module not found
            self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args, {})

    def test_action_base__compute_environment_string(self):
        fake_loader = DictDataLoader({
        })

        # create our fake task
        mock_task = MagicMock()
        mock_task.action = "copy"
        mock_task.args = dict(a=1)

        # create a mock connection, so we don't actually try and connect to things
        def env_prefix(**args):
            return ' '.join(['%s=%s' % (k, shlex_quote(text_type(v))) for k, v in args.items()])

        mock_connection = MagicMock()
        mock_connection._shell.env_prefix.side_effect = env_prefix

        # we're using a real play context here
        play_context = PlayContext()

        # and we're using a real templar here too
        templar = Templar(loader=fake_loader)

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=fake_loader,
            templar=templar,
            shared_loader_obj=None,
        )

        # test standard environment setup
        mock_task.environment = [dict(FOO='foo'), None]
        env_string = action_base._compute_environment_string()
        self.assertEqual(env_string, "FOO=foo")

        # test where environment is not a list
        mock_task.environment = dict(FOO='foo')
        env_string = action_base._compute_environment_string()
        self.assertEqual(env_string, "FOO=foo")

        # test environment with a variable in it
        templar.available_variables = dict(the_var='bar')
        mock_task.environment = [dict(FOO='{{the_var}}')]
        env_string = action_base._compute_environment_string()
        self.assertEqual(env_string, "FOO=bar")

        # test with a bad environment set
        mock_task.environment = dict(FOO='foo')
        mock_task.environment = ['hi there']
        self.assertRaises(AnsibleError, action_base._compute_environment_string)

    def test_action_base__early_needs_tmp_path(self):
        # create our fake task
        mock_task = MagicMock()

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        self.assertFalse(action_base._early_needs_tmp_path())

        action_base.TRANSFERS_FILES = True
        self.assertTrue(action_base._early_needs_tmp_path())

    def test_action_base__make_tmp_path(self):
        # create our fake task
        mock_task = MagicMock()

        def get_shell_opt(opt):
            ret = None
            if opt == 'admin_users':
                ret = ['root', 'toor', 'Administrator']
            elif opt == 'remote_tmp':
                ret = '~/.ansible/tmp'
            return ret

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()
        mock_connection.transport = 'ssh'
        mock_connection._shell.mkdtemp.return_value = 'mkdir command'
        mock_connection._shell.join_path.side_effect = os.path.join
        mock_connection._shell.get_option = get_shell_opt
        mock_connection._shell.HOMES_RE = re.compile(r'(\'|\")?(~|\$HOME)(.*)')

        # we're using a real play context here
        play_context = PlayContext()
        play_context.become = True
        play_context.become_user = 'foo'

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        action_base._low_level_execute_command = MagicMock()
        action_base._low_level_execute_command.return_value = dict(rc=0, stdout='/some/path')
        self.assertEqual(action_base._make_tmp_path('root'), '/some/path/')

        # empty path fails
        action_base._low_level_execute_command.return_value = dict(rc=0, stdout='')
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')

        # authentication failure
        action_base._low_level_execute_command.return_value = dict(rc=5, stdout='')
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')

        # ssh error
        action_base._low_level_execute_command.return_value = dict(rc=255, stdout='', stderr='')
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
        play_context.verbosity = 5
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')

        # general error
        action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='')
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
        action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='No space left on device')
        self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')

    def test_action_base__fixup_perms2(self):
        mock_task = MagicMock()
        mock_connection = MagicMock()
        play_context = PlayContext()
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )
        action_base._low_level_execute_command = MagicMock()
        remote_paths = ['/tmp/foo/bar.txt', '/tmp/baz.txt']
        remote_user = 'remoteuser1'

        def runWithNoExpectation(execute=False):
            return action_base._fixup_perms2(
                remote_paths,
                remote_user=remote_user,
                execute=execute)

        def assertSuccess(execute=False):
            self.assertEqual(runWithNoExpectation(execute), remote_paths)

        def assertThrowRegex(regex, execute=False):
            self.assertRaisesRegexp(
                AnsibleError,
                regex,
                action_base._fixup_perms2,
                remote_paths,
                remote_user=remote_user,
                execute=execute)

        def get_shell_option_for_arg(args_kv, default):
            '''A helper for get_shell_option. Returns a function that, if
            called with ``option`` that exists in args_kv, will return the
            value, else will return ``default`` for every other given arg'''
            def _helper(option, *args, **kwargs):
                return args_kv.get(option, default)
            return _helper

        # Step 1: On windows, we just return remote_paths
        action_base._connection._shell._IS_WINDOWS = True
        assertSuccess(execute=False)
        assertSuccess(execute=True)

        # But if we're not on windows....we have more work to do.
        action_base._connection._shell._IS_WINDOWS = False

        # Step 2: We're /not/ becoming an unprivileged user
        action_base._remote_chmod = MagicMock()
        action_base._is_become_unprivileged = MagicMock()
        action_base._is_become_unprivileged.return_value = False
        # Two subcases:
        #   - _remote_chmod rc is 0
        #   - _remote-chmod rc is not 0, something failed
        action_base._remote_chmod.return_value = {
            'rc': 0,
            'stdout': 'some stuff here',
            'stderr': '',
        }
        assertSuccess(execute=True)

        # When execute=False, we just get the list back. But add it here for
        # completion. chmod is never called.
        assertSuccess()

        action_base._remote_chmod.return_value = {
            'rc': 1,
            'stdout': 'some stuff here',
            'stderr': 'and here',
        }
        assertThrowRegex(
            'Failed to set execute bit on remote files',
            execute=True)

        # Step 3: we are becoming unprivileged
        action_base._is_become_unprivileged.return_value = True

        # Step 3a: setfacl
        action_base._remote_set_user_facl = MagicMock()
        action_base._remote_set_user_facl.return_value = {
            'rc': 0,
            'stdout': '',
            'stderr': '',
        }
        assertSuccess()

        # Step 3b: chmod +x if we need to
        # To get here, setfacl failed, so mock it as such.
        action_base._remote_set_user_facl.return_value = {
            'rc': 1,
            'stdout': '',
            'stderr': '',
        }
        action_base._remote_chmod.return_value = {
            'rc': 0,
            'stdout': 'some stuff here',
            'stderr': '',
        }
        assertSuccess(execute=True)
        action_base._remote_chmod.return_value = {
            'rc': 1,
            'stdout': 'some stuff here',
            'stderr': '',
        }
        assertThrowRegex(
            'Failed to set file mode on remote temporary file',
            execute=True)

        # Step 3c: chown
        action_base._remote_chown = MagicMock()
        action_base._remote_chown.return_value = {
            'rc': 0,
            'stdout': '',
            'stderr': '',
        }
        assertSuccess()
        action_base._remote_chown.return_value = {
            'rc': 1,
            'stdout': '',
            'stderr': '',
        }
        remote_user = 'root'
        action_base._get_admin_users = MagicMock()
        action_base._get_admin_users.return_value = ['root']
        assertThrowRegex('user would be unable to read the file.')
        remote_user = 'remoteuser1'

        # Step 3d: Common group
        get_shell_option = action_base.get_shell_option
        action_base.get_shell_option = MagicMock()
        action_base.get_shell_option.side_effect = get_shell_option_for_arg(
            {
                'common_remote_group': 'commongroup',
            },
            None)
        action_base._remote_chgrp = MagicMock()
        action_base._remote_chgrp.return_value = {
            'rc': 0,
            'stdout': '',
            'stderr': '',
        }
        # TODO: Add test to assert warning is shown if
        # ALLOW_WORLD_READABLE_TMPFILES is set in this case.
        action_base._remote_chmod.return_value = {
            'rc': 0,
            'stdout': '',
            'stderr': '',
        }
        assertSuccess()
        action_base._remote_chgrp.assert_called_once_with(
            remote_paths,
            'commongroup')

        # Step 4: world-readable tmpdir
        action_base.get_shell_option.side_effect = get_shell_option_for_arg(
            {
                'world_readable_temp': True,
                'common_remote_group': None,
            },
            None)
        action_base._remote_chmod.return_value = {
            'rc': 0,
            'stdout': 'some stuff here',
            'stderr': '',
        }
        assertSuccess()
        action_base._remote_chmod.return_value = {
            'rc': 1,
            'stdout': 'some stuff here',
            'stderr': '',
        }
        assertThrowRegex('Failed to set file mode on remote files')

        # Otherwise if we make it here in this state, we hit the catch-all
        action_base.get_shell_option.side_effect = get_shell_option_for_arg(
            {},
            None)
        assertThrowRegex('on the temporary files Ansible needs to create')

    def test_action_base__remove_tmp_path(self):
        # create our fake task
        mock_task = MagicMock()

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()
        mock_connection._shell.remove.return_value = 'rm some stuff'

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        action_base._low_level_execute_command = MagicMock()
        # these don't really return anything or raise errors, so
        # we're pretty much calling these for coverage right now
        action_base._remove_tmp_path('/bad/path/dont/remove')
        action_base._remove_tmp_path('/good/path/to/ansible-tmp-thing')

    @patch('os.unlink')
    @patch('os.fdopen')
    @patch('tempfile.mkstemp')
    def test_action_base__transfer_data(self, mock_mkstemp, mock_fdopen, mock_unlink):
        # create our fake task
        mock_task = MagicMock()

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()
        mock_connection.put_file.return_value = None

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        mock_afd = MagicMock()
        mock_afile = MagicMock()
        mock_mkstemp.return_value = (mock_afd, mock_afile)

        mock_unlink.return_value = None

        mock_afo = MagicMock()
        mock_afo.write.return_value = None
        mock_afo.flush.return_value = None
        mock_afo.close.return_value = None
        mock_fdopen.return_value = mock_afo

        self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some data'), '/path/to/remote/file')
        self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some mixed data: fö〩'), '/path/to/remote/file')
        self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='some value')), '/path/to/remote/file')
        self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='fö〩')), '/path/to/remote/file')

        mock_afo.write.side_effect = Exception()
        self.assertRaises(AnsibleError, action_base._transfer_data, '/path/to/remote/file', '')

    def test_action_base__execute_remote_stat(self):
        # create our fake task
        mock_task = MagicMock()

        # create a mock connection, so we don't actually try and connect to things
        mock_connection = MagicMock()

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        action_base._execute_module = MagicMock()

        # test normal case
        action_base._execute_module.return_value = dict(stat=dict(checksum='1111111111111111111111111111111111', exists=True))
        res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
        self.assertEqual(res['checksum'], '1111111111111111111111111111111111')

        # test does not exist
        action_base._execute_module.return_value = dict(stat=dict(exists=False))
        res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
        self.assertFalse(res['exists'])
        self.assertEqual(res['checksum'], '1')

        # test no checksum in result from _execute_module
        action_base._execute_module.return_value = dict(stat=dict(exists=True))
        res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
        self.assertTrue(res['exists'])
        self.assertEqual(res['checksum'], '')

        # test stat call failed
        action_base._execute_module.return_value = dict(failed=True, msg="because I said so")
        self.assertRaises(AnsibleError, action_base._execute_remote_stat, path='/path/to/file', all_vars=dict(), follow=False)

    def test_action_base__execute_module(self):
        # create our fake task
        mock_task = MagicMock()
        mock_task.action = 'copy'
        mock_task.args = dict(a=1, b=2, c=3)

        # create a mock connection, so we don't actually try and connect to things
        def build_module_command(env_string, shebang, cmd, arg_path=None):
            to_run = [env_string, cmd]
            if arg_path:
                to_run.append(arg_path)
            return " ".join(to_run)

        def get_option(option):
            return {'admin_users': ['root', 'toor']}.get(option)

        mock_connection = MagicMock()
        mock_connection.build_module_command.side_effect = build_module_command
        mock_connection.socket_path = None
        mock_connection._shell.get_remote_filename.return_value = 'copy.py'
        mock_connection._shell.join_path.side_effect = os.path.join
        mock_connection._shell.tmpdir = '/var/tmp/mytempdir'
        mock_connection._shell.get_option = get_option

        # we're using a real play context here
        play_context = PlayContext()

        # our test class
        action_base = DerivedActionBase(
            task=mock_task,
            connection=mock_connection,
            play_context=play_context,
            loader=None,
            templar=None,
            shared_loader_obj=None,
        )

        # fake a lot of methods as we test those elsewhere
        action_base._configure_module = MagicMock()
        action_base._supports_check_mode = MagicMock()
        action_base._is_pipelining_enabled = MagicMock()
        action_base._make_tmp_path = MagicMock()
        action_base._transfer_data = MagicMock()
        action_base._compute_environment_string = MagicMock()
        action_base._low_level_execute_command = MagicMock()
        action_base._fixup_perms2 = MagicMock()

        action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
        action_base._is_pipelining_enabled.return_value = False
        action_base._compute_environment_string.return_value = ''
        action_base._connection.has_pipelining = False
        action_base._make_tmp_path.return_value = '/the/tmp/path'
        action_base._low_level_execute_command.return_value = dict(stdout='{"rc": 0, "stdout": "ok"}')
        self.assertEqual(action_base._execute_module(module_name=None, module_args=None), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
        self.assertEqual(
            action_base._execute_module(
                module_name='foo',
                module_args=dict(z=9, y=8, x=7),
                task_vars=dict(a=1)
            ),
            dict(
                _ansible_parsed=True,
                rc=0,
                stdout="ok",
                stdout_lines=['ok'],
            )
        )

        # test with needing/removing a remote tmp path
        action_base._configure_module.return_value = ('old', '#!/usr/bin/python', 'this is the module data', 'path')
        action_base._is_pipelining_enabled.return_value = False
        action_base._make_tmp_path.return_value = '/the/tmp/path'
        self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))

        action_base._configure_module.return_value = ('non_native_want_json', '#!/usr/bin/python', 'this is the module data', 'path')
        self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))

        play_context.become = True
        play_context.become_user = 'foo'
        self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))

        # test an invalid shebang return
        action_base._configure_module.return_value = ('new', '', 'this is the module data', 'path')
        action_base._is_pipelining_enabled.return_value = False
        action_base._make_tmp_path.return_value = '/the/tmp/path'
        self.assertRaises(AnsibleError, action_base._execute_module)

        # test with check mode enabled, once with support for check
        # mode and once with support disabled to raise an error
        play_context.check_mode = True
        action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
        self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
        action_base._supports_check_mode = False
        self.assertRaises(AnsibleError, action_base._execute_module)

    def test_action_base_sudo_only_if_user_differs(self):
        fake_loader = MagicMock()
        fake_loader.get_basedir.return_value = os.getcwd()
        play_context = PlayContext()

        action_base = DerivedActionBase(None, None, play_context, fake_loader, None, None)
        action_base.get_become_option = MagicMock(return_value='root')
        action_base._get_remote_user = MagicMock(return_value='root')

        action_base._connection = MagicMock(exec_command=MagicMock(return_value=(0, '', '')))

        action_base._connection._shell = shell = MagicMock(append_command=MagicMock(return_value=('JOINED CMD')))

        action_base._connection.become = become = MagicMock()
        become.build_become_command.return_value = 'foo'

        action_base._low_level_execute_command('ECHO', sudoable=True)
        become.build_become_command.assert_not_called()

        action_base._get_remote_user.return_value = 'apo'
        action_base._low_level_execute_command('ECHO', sudoable=True, executable='/bin/csh')
        become.build_become_command.assert_called_once_with("ECHO", shell)

        become.build_become_command.reset_mock()

        with patch.object(C, 'BECOME_ALLOW_SAME_USER', new=True):
            action_base._get_remote_user.return_value = 'root'
            action_base._low_level_execute_command('ECHO SAME', sudoable=True)
            become.build_become_command.assert_called_once_with("ECHO SAME", shell)

    def test__remote_expand_user_relative_pathing(self):
        action_base = _action_base()
        action_base._play_context.remote_addr = 'bar'
        action_base._low_level_execute_command = MagicMock(return_value={'stdout': b'../home/user'})
        action_base._connection._shell.join_path.return_value = '../home/user/foo'
        with self.assertRaises(AnsibleError) as cm:
            action_base._remote_expand_user('~/foo')
        self.assertEqual(
            cm.exception.message,
            "'bar' returned an invalid relative home directory path containing '..'"
        )


class TestActionBaseCleanReturnedData(unittest.TestCase):
    def test(self):

        fake_loader = DictDataLoader({
        })
        mock_module_loader = MagicMock()
        mock_shared_loader_obj = MagicMock()
        mock_shared_loader_obj.module_loader = mock_module_loader
        connection_loader_paths = ['/tmp/asdfadf', '/usr/lib64/whatever', 'dfadfasf',
                                   'foo.py', '.*',
                                   # FIXME: a path with parans breaks the regex
                                   # '(.*)',
                                   '/path/to/ansible/lib/ansible/plugins/connection/custom_connection.py',
                                   '/path/to/ansible/lib/ansible/plugins/connection/ssh.py']

        def fake_all(path_only=None):
            for path in connection_loader_paths:
                yield path

        mock_connection_loader = MagicMock()
        mock_connection_loader.all = fake_all

        mock_shared_loader_obj.connection_loader = mock_connection_loader
        mock_connection = MagicMock()
        # mock_connection._shell.env_prefix.side_effect = env_prefix

        # action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
        action_base = DerivedActionBase(task=None,
                                        connection=mock_connection,
                                        play_context=None,
                                        loader=fake_loader,
                                        templar=None,
                                        shared_loader_obj=mock_shared_loader_obj)
        data = {'ansible_playbook_python': '/usr/bin/python',
                # 'ansible_rsync_path': '/usr/bin/rsync',
                'ansible_python_interpreter': '/usr/bin/python',
                'ansible_ssh_some_var': 'whatever',
                'ansible_ssh_host_key_somehost': 'some key here',
                'some_other_var': 'foo bar'}
        data = clean_facts(data)
        self.assertNotIn('ansible_playbook_python', data)
        self.assertNotIn('ansible_python_interpreter', data)
        self.assertIn('ansible_ssh_host_key_somehost', data)
        self.assertIn('some_other_var', data)


class TestActionBaseParseReturnedData(unittest.TestCase):

    def test_fail_no_json(self):
        action_base = _action_base()
        rc = 0
        stdout = 'foo\nbar\n'
        err = 'oopsy'
        returned_data = {'rc': rc,
                         'stdout': stdout,
                         'stdout_lines': stdout.splitlines(),
                         'stderr': err}
        res = action_base._parse_returned_data(returned_data)
        self.assertFalse(res['_ansible_parsed'])
        self.assertTrue(res['failed'])
        self.assertEqual(res['module_stderr'], err)

    def test_json_empty(self):
        action_base = _action_base()
        rc = 0
        stdout = '{}\n'
        err = ''
        returned_data = {'rc': rc,
                         'stdout': stdout,
                         'stdout_lines': stdout.splitlines(),
                         'stderr': err}
        res = action_base._parse_returned_data(returned_data)
        del res['_ansible_parsed']  # we always have _ansible_parsed
        self.assertEqual(len(res), 0)
        self.assertFalse(res)

    def test_json_facts(self):
        action_base = _action_base()
        rc = 0
        stdout = '{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"}}\n'
        err = ''

        returned_data = {'rc': rc,
                         'stdout': stdout,
                         'stdout_lines': stdout.splitlines(),
                         'stderr': err}
        res = action_base._parse_returned_data(returned_data)
        self.assertTrue(res['ansible_facts'])
        self.assertIn('ansible_blip', res['ansible_facts'])
        # TODO: Should this be an AnsibleUnsafe?
        # self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)

    def test_json_facts_add_host(self):
        action_base = _action_base()
        rc = 0
        stdout = '''{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"},
        "add_host": {"host_vars": {"some_key": ["whatever the add_host object is"]}
        }
        }\n'''
        err = ''

        returned_data = {'rc': rc,
                         'stdout': stdout,
                         'stdout_lines': stdout.splitlines(),
                         'stderr': err}
        res = action_base._parse_returned_data(returned_data)
        self.assertTrue(res['ansible_facts'])
        self.assertIn('ansible_blip', res['ansible_facts'])
        self.assertIn('add_host', res)
        # TODO: Should this be an AnsibleUnsafe?
        # self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
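A reusable pattern in the tests above is `get_shell_option_for_arg`, which fakes `get_shell_option` with a fixed mapping. The same idea shown standalone, using only the standard-library mock rather than `units.compat.mock` (the helper names here are ours, not the test file's):

```python
# A minimal sketch of side_effect-based option stubbing.
from unittest.mock import MagicMock

def option_stub(options, default=None):
    """Mimic get_shell_option for a fixed option->value mapping."""
    def _helper(option, *args, **kwargs):
        return options.get(option, default)
    return _helper

shell = MagicMock()
shell.get_option = MagicMock(side_effect=option_stub({'remote_tmp': '~/.ansible/tmp'}))
assert shell.get_option('remote_tmp') == '~/.ansible/tmp'
assert shell.get_option('common_remote_group') is None  # falls back to default
```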
closed
ansible/ansible
https://github.com/ansible/ansible
68,962
Include demo/example repo options in getting started page
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Ansible provides a couple of good resources for learning/experimenting with Ansible. Add these to getting started:

https://github.com/ansible/product-demos
https://katacoda.com/rhel-labs

<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->

##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/68962
https://github.com/ansible/ansible/pull/71102
7c60dadb9a3832dee0014113a337bc10fa1088c0
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
2020-04-15T14:45:36Z
python
2020-08-07T21:09:19Z
docs/docsite/rst/user_guide/intro_getting_started.rst
.. _intro_getting_started:

***************
Getting Started
***************

Now that you have read the :ref:`installation guide<installation_guide>` and installed Ansible on a control node, you are ready to learn how Ansible works. A basic Ansible command or playbook:

* selects machines to execute against from inventory
* connects to those machines (or network devices, or other managed nodes), usually over SSH
* copies one or more modules to the remote machines and starts execution there

Ansible can do much more, but you should understand the most common use case before exploring all the powerful configuration, deployment, and orchestration features of Ansible. This page illustrates the basic process with a simple inventory and an ad-hoc command. Once you understand how Ansible works, you can read more details about :ref:`ad-hoc commands<intro_adhoc>`, organize your infrastructure with :ref:`inventory<intro_inventory>`, and harness the full power of Ansible with :ref:`playbooks<playbooks_intro>`.

.. contents::
   :local:

Selecting machines from inventory
=================================

Ansible reads information about which machines you want to manage from your inventory. Although you can pass an IP address to an ad-hoc command, you need inventory to take advantage of the full flexibility and repeatability of Ansible.

Action: create a basic inventory
--------------------------------
For this basic inventory, edit (or create) ``/etc/ansible/hosts`` and add a few remote systems to it. For this example, use either IP addresses or FQDNs:

.. code-block:: text

   192.0.2.50
   aserver.example.org
   bserver.example.org

Beyond the basics
-----------------
Your inventory can store much more than IPs and FQDNs. You can create :ref:`aliases<inventory_aliases>`, set variable values for a single host with :ref:`host vars<host_variables>`, or set variable values for multiple hosts with :ref:`group vars<group_variables>`.

.. _remote_connection_information:

Connecting to remote nodes
==========================

Ansible communicates with remote machines over the `SSH protocol <https://www.ssh.com/ssh/protocol/>`_. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.

Action: check your SSH connections
----------------------------------
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the ``authorized_keys`` file on those systems.

Beyond the basics
-----------------
You can override the default remote user name in several ways, including:

* passing the ``-u`` parameter at the command line
* setting user information in your inventory file
* setting user information in your configuration file
* setting environment variables

See :ref:`general_precedence_rules` for details on the (sometimes unintuitive) precedence of each method of passing user information. You can read more about connections in :ref:`connections`.

Copying and executing modules
=============================

Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.

Action: run your first Ansible commands
---------------------------------------
Use the ping module to ping all the nodes in your inventory:

.. code-block:: bash

   $ ansible all -m ping

Now run a live command on all of your nodes:

.. code-block:: bash

   $ ansible all -a "/bin/echo hello"

You should see output for each host in your inventory, similar to this:

.. code-block:: ansible-output

   aserver.example.org | SUCCESS => {
       "ansible_facts": {
           "discovered_interpreter_python": "/usr/bin/python"
       },
       "changed": false,
       "ping": "pong"
   }

Beyond the basics
-----------------
By default Ansible uses SFTP to transfer files. If the machine or device you want to manage does not support SFTP, you can switch to SCP mode in :ref:`intro_configuration`. The files are placed in a temporary directory and executed from there.

If you need privilege escalation (sudo and similar) to run a command, pass the ``become`` flags:

.. code-block:: bash

   # as bruce
   $ ansible all -m ping -u bruce
   # as bruce, sudoing to root (sudo is default method)
   $ ansible all -m ping -u bruce --become
   # as bruce, sudoing to batman
   $ ansible all -m ping -u bruce --become --become-user batman

You can read more about privilege escalation in :ref:`become`.

Congratulations! You have contacted your nodes using Ansible. You used a basic inventory file and an ad-hoc command to direct Ansible to connect to specific remote nodes, copy a module file there and execute it, and return output. You have a fully working infrastructure.

Next steps
==========
Next you can read about more real-world cases in :ref:`intro_adhoc`, explore what you can do with different modules, or read about the Ansible :ref:`working_with_playbooks` language. Ansible is not just about running commands, it also has powerful configuration management and deployment features.

.. seealso::

   :ref:`intro_inventory`
       More information about inventory
   :ref:`intro_adhoc`
       Examples of basic commands
   :ref:`working_with_playbooks`
       Learning Ansible's configuration management language
   `Ansible Demos <https://github.com/ansible/product-demos>`_
       Demonstrations of different Ansible usecases
   `RHEL Labs <https://katacoda.com/rhel-labs>`_
       Labs to provide further knowledge on different topics
   `Mailing List <https://groups.google.com/group/ansible-project>`_
       Questions? Help? Ideas? Stop by the list on Google Groups
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
WinRM Kerberos tickets are not generated for playbook tasks. The issue occurs in 2.9.10 and 2.9.11, while the same setup works with no code change in 2.9.9.

In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play runs with no manual ticket creation.

Besides the realms configuration (which I am sure is right, as the same setup works on Ansible 2.9.9), the rest of the krb5 configuration is below; default_ccache_name has always been disabled.

```
bash-4.4# cat /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = <DOMAIN NAME>
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
# default_ccache_name = KEYRING:persistent:%{uid}
```

Although it fails in AWX, the following works from the awx_docker command line:

```
bash-4.4# cat sam_inven
[windows]
<HOST FQDN>

[windows:vars]
ansible_user=<AD ACCOUNT>@<DOMAIN NAME>
ansible_password=<AD ACCOUNT Password>
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore
ansible_winrm_kerberos_delegation=true
become_method=runas
```

```
bash-4.4# cat test-play.yml
---
- name: PROXY TEST
  hosts: <HOST FQDN>
  gather_facts: no
  collections:
    - community.vmware
  tasks:
    - name: Wait for system connection via WinRM
      wait_for_connection:
        timeout: 20
      register: result_wait

    - name: Show WinRM result
      debug:
        var: result_wait
```

```
bash-4.4# ansible-playbook -i sam_inven test-play.yml

PLAY [PROXY TEST] **************************************************************************************************************************************************

TASK [Wait for system connection via WinRM] ************************************************************************************************************************
ok: [<HOST FQDN>]

TASK [Show WinRM result] *******************************************************************************************************************************************
ok: [<HOST FQDN>] => {
    "result_wait": {
        "changed": false,
        "elapsed": 1,
        "failed": false
    }
}

PLAY RECAP *********************************************************************************************************************************************************
<HOST FQDN>                : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

bash-4.4#
```

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
WINRM Kerberos

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
bash-4.4# ansible-config dump --only-changed
bash-4.4#
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core)
Target hosts are Windows machines running Windows Server 2016

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
In AWX inventory host var:

```
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore
ansible_winrm_kerberos_delegation: true
become_method: runas
ansible_port: 5986
```

```yaml
---
- name: TEST
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Wait for system connection via WinRM
      wait_for_connection:
        timeout: 60
      register: result_wait
      delegate_to: <host FQDN>
      ignore_errors: yes

    - name: Show WinRM result
      debug:
        var: result_wait
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
{
    "elapsed": 1,
    "_ansible_no_log": false,
    "changed": false,
    "_ansible_delegated_vars": {
        "ansible_host": "<HOST FQDN>"
    }
}
```

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
{
    "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))",
    "elapsed": 60,
    "_ansible_no_log": false,
    "changed": false,
    "_ansible_delegated_vars": {
        "ansible_host": "<HOST FQDN>",
        "ansible_port": 5986,
        "ansible_user": "<acount name>@<DOMAIN NAME>"
    }
}
```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
changelogs/fragments/delegation_password.yml
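The record above names the changelog fragment changelogs/fragments/delegation_password.yml but does not include its content. For orientation, Ansible changelog fragments are small YAML files keyed by change type (bugfixes, minor_changes, and so on); the following is a hypothetical illustration of the format only, not the fragment's actual wording:

```yaml
# Hypothetical illustration of the fragment format; the real text of
# delegation_password.yml is not reproduced in this record.
bugfixes:
  - Ensure the password set for a delegated host is exposed to
    connection plugin options.
```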
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not generated for playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play runs with no manual ticket creation. Besides the realms configuration (which I am sure is correct, as the code works with Ansible 2.9.9), the rest of the krb5 configuration is below; default_ccache_name has always been disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from 
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
lib/ansible/config/manager.py
# Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import atexit import io import os import os.path import sys import stat import tempfile import traceback from collections import namedtuple from yaml import load as yaml_load try: # use C version if possible for speedup from yaml import CSafeLoader as SafeLoader except ImportError: from yaml import SafeLoader from ansible.config.data import ConfigData from ansible.errors import AnsibleOptionsError, AnsibleError from ansible.module_utils._text import to_text, to_bytes, to_native from ansible.module_utils.common._collections_compat import Sequence from ansible.module_utils.six import PY3, string_types from ansible.module_utils.six.moves import configparser from ansible.module_utils.parsing.convert_bool import boolean from ansible.parsing.quoting import unquote from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode from ansible.utils import py3compat from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath Plugin = namedtuple('Plugin', 'name type') Setting = namedtuple('Setting', 'name value origin type') INTERNAL_DEFS = {'lookup': ('_terms',)} def _get_entry(plugin_type, plugin_name, config): ''' construct entry for requested config ''' entry = '' if plugin_type: entry += 'plugin_type: %s ' % plugin_type if plugin_name: entry += 'plugin: %s ' % plugin_name entry += 'setting: %s ' % config return entry # FIXME: see if we can unify in module_utils with similar function used by argspec def ensure_type(value, value_type, origin=None): ''' return a configuration variable with casting :arg value: The value to ensure correct typing of :kwarg value_type: The type of the value. This can be any of the following strings: :boolean: sets the value to a True or False value :bool: Same as 'boolean' :integer: Sets the value to an integer or raises a ValueType error :int: Same as 'integer' :float: Sets the value to a float or raises a ValueType error :list: Treats the value as a comma separated list. Split the value and return it as a python list. :none: Sets the value to None :path: Expands any environment variables and tilde's in the value. :tmppath: Create a unique temporary directory inside of the directory specified by value and return its path. :temppath: Same as 'tmppath' :tmp: Same as 'tmppath' :pathlist: Treat the value as a typical PATH string. (On POSIX, this means colon separated strings.) Split the value and then expand each part for environment variables and tildes. :pathspec: Treat the value as a PATH string. Expands any environment variables tildes's in the value. :str: Sets the value to string types. 
:string: Same as 'str' ''' errmsg = '' basedir = None if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)): basedir = origin if value_type: value_type = value_type.lower() if value is not None: if value_type in ('boolean', 'bool'): value = boolean(value, strict=False) elif value_type in ('integer', 'int'): value = int(value) elif value_type == 'float': value = float(value) elif value_type == 'list': if isinstance(value, string_types): value = [x.strip() for x in value.split(',')] elif not isinstance(value, Sequence): errmsg = 'list' elif value_type == 'none': if value == "None": value = None if value is not None: errmsg = 'None' elif value_type == 'path': if isinstance(value, string_types): value = resolve_path(value, basedir=basedir) else: errmsg = 'path' elif value_type in ('tmp', 'temppath', 'tmppath'): if isinstance(value, string_types): value = resolve_path(value, basedir=basedir) if not os.path.exists(value): makedirs_safe(value, 0o700) prefix = 'ansible-local-%s' % os.getpid() value = tempfile.mkdtemp(prefix=prefix, dir=value) atexit.register(cleanup_tmp_file, value, warn=True) else: errmsg = 'temppath' elif value_type == 'pathspec': if isinstance(value, string_types): value = value.split(os.pathsep) if isinstance(value, Sequence): value = [resolve_path(x, basedir=basedir) for x in value] else: errmsg = 'pathspec' elif value_type == 'pathlist': if isinstance(value, string_types): value = [x.strip() for x in value.split(',')] if isinstance(value, Sequence): value = [resolve_path(x, basedir=basedir) for x in value] else: errmsg = 'pathlist' elif value_type in ('str', 'string'): if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)): value = unquote(to_text(value, errors='surrogate_or_strict')) else: errmsg = 'string' # defaults to string type elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)): value = unquote(to_text(value, errors='surrogate_or_strict')) if errmsg: raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value))) return to_text(value, errors='surrogate_or_strict', nonstring='passthru') # FIXME: see if this can live in utils/path def resolve_path(path, basedir=None): ''' resolve relative or 'variable' paths ''' if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}} path = path.replace('{{CWD}}', os.getcwd()) return unfrackpath(path, follow=False, basedir=basedir) # FIXME: generic file type? def get_config_type(cfile): ftype = None if cfile is not None: ext = os.path.splitext(cfile)[-1] if ext in ('.ini', '.cfg'): ftype = 'ini' elif ext in ('.yaml', '.yml'): ftype = 'yaml' else: raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext))) return ftype # FIXME: can move to module_utils for use for ini plugins also? def get_ini_config_value(p, entry): ''' returns the value of last ini entry found ''' value = None if p is not None: try: value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True) except Exception: # FIXME: actually report issues here pass return value def find_ini_config_file(warnings=None): ''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible ''' # FIXME: eventually deprecate ini configs if warnings is None: # Note: In this case, warnings does nothing warnings = set() # A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later # We can't use None because we could set path to None. 
SENTINEL = object potential_paths = [] # Environment setting path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL) if path_from_env is not SENTINEL: path_from_env = unfrackpath(path_from_env, follow=False) if os.path.isdir(to_bytes(path_from_env)): path_from_env = os.path.join(path_from_env, "ansible.cfg") potential_paths.append(path_from_env) # Current working directory warn_cmd_public = False try: cwd = os.getcwd() perms = os.stat(cwd) cwd_cfg = os.path.join(cwd, "ansible.cfg") if perms.st_mode & stat.S_IWOTH: # Working directory is world writable so we'll skip it. # Still have to look for a file here, though, so that we know if we have to warn if os.path.exists(cwd_cfg): warn_cmd_public = True else: potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict')) except OSError: # If we can't access cwd, we'll simply skip it as a possible config source pass # Per user location potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False)) # System location potential_paths.append("/etc/ansible/ansible.cfg") for path in potential_paths: b_path = to_bytes(path) if os.path.exists(b_path) and os.access(b_path, os.R_OK): break else: path = None # Emit a warning if all the following are true: # * We did not use a config from ANSIBLE_CONFIG # * There's an ansible.cfg in the current working directory that we skipped if path_from_env != path and warn_cmd_public: warnings.add(u"Ansible is being run in a world writable directory (%s)," u" ignoring it as an ansible.cfg source." u" For more information see" u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir" % to_text(cwd)) return path class ConfigManager(object): DEPRECATED = [] WARNINGS = set() def __init__(self, conf_file=None, defs_file=None): self._base_defs = {} self._plugins = {} self._parsers = {} self._config_file = conf_file self.data = ConfigData() self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__))) if self._config_file is None: # set config using ini self._config_file = find_ini_config_file(self.WARNINGS) # consume configuration if self._config_file: # initialize parser and read config self._parse_config_file() # update constants self.update_config_data() def _read_config_yaml_file(self, yml_file): # TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD # Currently this is only used with absolute paths to the `ansible/config` directory yml_file = to_bytes(yml_file) if os.path.exists(yml_file): with open(yml_file, 'rb') as config_def: return yaml_load(config_def, Loader=SafeLoader) or {} raise AnsibleError( "Missing base YAML definition file (bad install?): %s" % to_native(yml_file)) def _parse_config_file(self, cfile=None): ''' return flat configuration settings from file(s) ''' # TODO: take list of files with merge/nomerge if cfile is None: cfile = self._config_file ftype = get_config_type(cfile) if cfile is not None: if ftype == 'ini': self._parsers[cfile] = configparser.ConfigParser() with open(to_bytes(cfile), 'rb') as f: try: cfg_text = to_text(f.read(), errors='surrogate_or_strict') except UnicodeError as e: raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e))) try: if PY3: self._parsers[cfile].read_string(cfg_text) else: cfg_file = io.StringIO(cfg_text) self._parsers[cfile].readfp(cfg_file) except configparser.Error as e: raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, 
to_native(e))) # FIXME: this should eventually handle yaml config files # elif ftype == 'yaml': # with open(cfile, 'rb') as config_stream: # self._parsers[cfile] = yaml.safe_load(config_stream) else: raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype)) def _find_yaml_config_files(self): ''' Load YAML Config Files in order, check merge flags, keep origin of settings''' pass def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None): options = {} defs = self.get_configuration_definitions(plugin_type, name) for option in defs: options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct) return options def get_plugin_vars(self, plugin_type, name): pvars = [] for pdef in self.get_configuration_definitions(plugin_type, name).values(): if 'vars' in pdef and pdef['vars']: for var_entry in pdef['vars']: pvars.append(var_entry['name']) return pvars def get_configuration_definition(self, name, plugin_type=None, plugin_name=None): ret = {} if plugin_type is None: ret = self._base_defs.get(name, None) elif plugin_name is None: ret = self._plugins.get(plugin_type, {}).get(name, None) else: ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None) return ret def get_configuration_definitions(self, plugin_type=None, name=None): ''' just list the possible settings, either base or for specific plugins or plugin ''' ret = {} if plugin_type is None: ret = self._base_defs elif name is None: ret = self._plugins.get(plugin_type, {}) else: ret = self._plugins.get(plugin_type, {}).get(name, {}) return ret def _loop_entries(self, container, entry_list): ''' repeat code for value entry assignment ''' value = None origin = None for entry in entry_list: name = entry.get('name') try: temp_value = container.get(name, None) except UnicodeEncodeError: self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name))) continue if temp_value is not None: # only set if entry is defined in container # inline vault variables should be converted to a text string if isinstance(temp_value, AnsibleVaultEncryptedUnicode): temp_value = to_text(temp_value, errors='surrogate_or_strict') value = temp_value origin = name # deal with deprecation of setting source, if used if 'deprecated' in entry: self.DEPRECATED.append((entry['name'], entry['deprecated'])) return value, origin def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None): ''' wrapper ''' try: value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name, keys=keys, variables=variables, direct=direct) except AnsibleError: raise except Exception as e: raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e) return value def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None): ''' Given a config key figure out the actual value and report on the origin of the settings ''' if cfile is None: # use default config cfile = self._config_file # Note: sources that are lists listed in low to high precedence (last one wins) value = None origin = None defs = self.get_configuration_definitions(plugin_type, plugin_name) if config in defs: # direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults direct_aliases 
= [] if direct: direct_aliases = [direct[alias] for alias in defs[config].get('aliases', []) if alias in direct] if direct and config in direct: value = direct[config] origin = 'Direct' elif direct and direct_aliases: value = direct_aliases[0] origin = 'Direct' else: # Use 'variable overrides' if present, highest precedence, but only present when querying running play if variables and defs[config].get('vars'): value, origin = self._loop_entries(variables, defs[config]['vars']) origin = 'var: %s' % origin # use playbook keywords if you have em if value is None and keys and config in keys: value, origin = keys[config], 'keyword' origin = 'keyword: %s' % origin # env vars are next precedence if value is None and defs[config].get('env'): value, origin = self._loop_entries(py3compat.environ, defs[config]['env']) origin = 'env: %s' % origin # try config file entries next, if we have one if self._parsers.get(cfile, None) is None: self._parse_config_file(cfile) if value is None and cfile is not None: ftype = get_config_type(cfile) if ftype and defs[config].get(ftype): if ftype == 'ini': # load from ini config try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe for ini_entry in defs[config]['ini']: temp_value = get_ini_config_value(self._parsers[cfile], ini_entry) if temp_value is not None: value = temp_value origin = cfile if 'deprecated' in ini_entry: self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated'])) except Exception as e: sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e))) elif ftype == 'yaml': # FIXME: implement, also , break down key from defs (. notation???) origin = cfile # set default if we got here w/o a value if value is None: if defs[config].get('required', False): if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}): raise AnsibleError("No setting was provided for required configuration %s" % to_native(_get_entry(plugin_type, plugin_name, config))) else: value = defs[config].get('default') origin = 'default' # skip typing as this is a templated default that will be resolved later in constants, which has needed vars if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')): return value, origin # ensure correct type, can raise exceptions on mismatched types try: value = ensure_type(value, defs[config].get('type'), origin=origin) except ValueError as e: if origin.startswith('env:') and value == '': # this is empty env var for non string so we can set to default origin = 'default' value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin) else: raise AnsibleOptionsError('Invalid type for configuration option %s: %s' % (to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e))) # deal with deprecation of the setting if 'deprecated' in defs[config] and origin != 'default': self.DEPRECATED.append((config, defs[config].get('deprecated'))) else: raise AnsibleError('Requested entry (%s) was not defined in configuration.' 
% to_native(_get_entry(plugin_type, plugin_name, config))) return value, origin def initialize_plugin_configuration_definitions(self, plugin_type, name, defs): if plugin_type not in self._plugins: self._plugins[plugin_type] = {} self._plugins[plugin_type][name] = defs def update_config_data(self, defs=None, configfile=None): ''' really: update constants ''' if defs is None: defs = self._base_defs if configfile is None: configfile = self._config_file if not isinstance(defs, dict): raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs)) # update the constant for config file self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string')) origin = None # env and config defs can have several entries, ordered in list from lowest to highest precedence for config in defs: if not isinstance(defs[config], dict): raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config]))) # get value and origin try: value, origin = self.get_config_value_and_origin(config, configfile) except Exception as e: # Printing the problem here because, in the current code: # (1) we can't reach the error handler for AnsibleError before we # hit a different error due to lack of working config. # (2) We don't have access to display yet because display depends on config # being properly loaded. # # If we start getting double errors printed from this section of code, then the # above problem #1 has been fixed. Revamp this to be more like the try: except # in get_config_value() at that time. sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc()) raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e) # set the constant self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
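Two parts of the file above are worth reading together: ensure_type(), which casts raw values, and get_config_value_and_origin(), which resolves a setting through the precedence chain of direct arguments, play variables, playbook keywords, environment variables, ini file entries, and finally the declared default. A few concrete calls against ensure_type(), derived only from the branches shown above:

```python
from ansible.config.manager import ensure_type

# Each call exercises one branch of ensure_type() as defined above.
ensure_type('yes', 'bool')            # True, via boolean(value, strict=False)
ensure_type('a, b ,c', 'list')        # ['a', 'b', 'c'] (comma split, stripped)
ensure_type('None', 'none')           # None
ensure_type('~/ansible.cfg', 'path')  # expanded absolute path via resolve_path()
```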
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not generated for playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play runs with no manual ticket creation. Besides the realms configuration (which I am sure is correct, as the code works with Ansible 2.9.9), the rest of the krb5 configuration is below; default_ccache_name has always been disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from 
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
lib/ansible/plugins/connection/psrp.py
# Copyright (c) 2018 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ author: Ansible Core Team connection: psrp short_description: Run tasks over Microsoft PowerShell Remoting Protocol description: - Run commands or put/fetch on a target via PSRP (WinRM plugin) - This is similar to the I(winrm) connection plugin which uses the same underlying transport but instead runs in a PowerShell interpreter. version_added: "2.7" requirements: - pypsrp (Python library) options: # transport options remote_addr: description: - The hostname or IP address of the remote host. default: inventory_hostname type: str vars: - name: ansible_host - name: ansible_psrp_host remote_user: description: - The user to log in as. type: str vars: - name: ansible_user - name: ansible_psrp_user remote_password: description: Authentication password for the C(remote_user). Can be supplied as CLI option. type: str vars: - name: ansible_password - name: ansible_winrm_pass - name: ansible_winrm_password aliases: [ password ] port: description: - The port for PSRP to connect on the remote target. - Default is C(5986) if I(protocol) is not defined or is C(https), otherwise the port is C(5985). type: int vars: - name: ansible_port - name: ansible_psrp_port protocol: description: - Set the protocol to use for the connection. - Default is C(https) if I(port) is not defined or I(port) is not C(5985). choices: - http - https type: str vars: - name: ansible_psrp_protocol path: description: - The URI path to connect to. type: str vars: - name: ansible_psrp_path default: 'wsman' auth: description: - The authentication protocol to use when authenticating the remote user. - The default, C(negotiate), will attempt to use C(Kerberos) if it is available and fall back to C(NTLM) if it isn't. type: str vars: - name: ansible_psrp_auth choices: - basic - certificate - negotiate - kerberos - ntlm - credssp default: negotiate cert_validation: description: - Whether to validate the remote server's certificate or not. - Set to C(ignore) to not validate any certificates. - I(ca_cert) can be set to the path of a PEM certificate chain to use in the validation. choices: - validate - ignore default: validate type: str vars: - name: ansible_psrp_cert_validation ca_cert: description: - The path to a PEM certificate chain to use when validating the server's certificate. - This value is ignored if I(cert_validation) is set to C(ignore). type: path vars: - name: ansible_psrp_cert_trust_path - name: ansible_psrp_ca_cert aliases: [ cert_trust_path ] connection_timeout: description: - The connection timeout for making the request to the remote host. - This is measured in seconds. type: int vars: - name: ansible_psrp_connection_timeout default: 30 read_timeout: description: - The read timeout for receiving data from the remote host. - This value must always be greater than I(operation_timeout). - This option requires pypsrp >= 0.3. - This is measured in seconds. type: int vars: - name: ansible_psrp_read_timeout default: 30 version_added: '2.8' reconnection_retries: description: - The number of retries on connection errors. type: int vars: - name: ansible_psrp_reconnection_retries default: 0 version_added: '2.8' reconnection_backoff: description: - The backoff time to use in between reconnection attempts. (First sleeps X, then sleeps 2*X, then sleeps 4*X, ...) - This is measured in seconds. 
- The C(ansible_psrp_reconnection_backoff) variable was added in Ansible 2.9. type: int vars: - name: ansible_psrp_connection_backoff - name: ansible_psrp_reconnection_backoff default: 2 version_added: '2.8' message_encryption: description: - Controls the message encryption settings; this is different from TLS encryption when I(ansible_psrp_protocol) is C(https). - Only the auth protocols C(negotiate), C(kerberos), C(ntlm), and C(credssp) can do message encryption. The other authentication protocols only support encryption when C(protocol) is set to C(https). - C(auto) means message encryption is only used when not using TLS/HTTPS. - C(always) is the same as C(auto) but message encryption is always used even when running over TLS/HTTPS. - C(never) disables any encryption checks that are in place when running over HTTP and disables any authentication encryption processes. type: str vars: - name: ansible_psrp_message_encryption choices: - auto - always - never default: auto proxy: description: - Set the proxy URL to use when connecting to the remote host. vars: - name: ansible_psrp_proxy type: str ignore_proxy: description: - Will disable any environment proxy settings and connect directly to the remote host. - This option is ignored if C(proxy) is set. vars: - name: ansible_psrp_ignore_proxy type: bool default: 'no' # auth options certificate_key_pem: description: - The local path to an X509 certificate key to use with certificate auth. type: path vars: - name: ansible_psrp_certificate_key_pem certificate_pem: description: - The local path to an X509 certificate to use with certificate auth. type: path vars: - name: ansible_psrp_certificate_pem credssp_auth_mechanism: description: - The sub authentication mechanism to use with CredSSP auth. - When C(auto), both Kerberos and NTLM are attempted, with Kerberos being preferred. type: str choices: - auto - kerberos - ntlm default: auto vars: - name: ansible_psrp_credssp_auth_mechanism credssp_disable_tlsv1_2: description: - Disables the use of TLSv1.2 on the CredSSP authentication channel. - This should not be set to C(yes) unless dealing with a host that does not have TLSv1.2. default: no type: bool vars: - name: ansible_psrp_credssp_disable_tlsv1_2 credssp_minimum_version: description: - The minimum CredSSP server authentication version that will be accepted. - Set to C(5) to ensure the server has been patched and is not vulnerable to CVE 2018-0886. default: 2 type: int vars: - name: ansible_psrp_credssp_minimum_version negotiate_delegate: description: - Allow the remote user the ability to delegate its credentials to another server, i.e. credential delegation. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. type: bool vars: - name: ansible_psrp_negotiate_delegate negotiate_hostname_override: description: - Override the remote hostname when searching for the host in the Kerberos lookup. - This allows Ansible to connect over IP but authenticate with the remote server using its DNS name. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. type: str vars: - name: ansible_psrp_negotiate_hostname_override negotiate_send_cbt: description: - Send the Channel Binding Token (CBT) structure when authenticating. - CBT is used to provide extra protection against Man in the Middle C(MitM) attacks by binding the outer transport channel to the auth channel. 
- CBT is not used when using just C(HTTP), only C(HTTPS). default: yes type: bool vars: - name: ansible_psrp_negotiate_send_cbt negotiate_service: description: - Override the service part of the SPN used during Kerberos authentication. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. default: WSMAN type: str vars: - name: ansible_psrp_negotiate_service # protocol options operation_timeout: description: - Sets the WSMan timeout for each operation. - This is measured in seconds. - This should not exceed the value for C(connection_timeout). type: int vars: - name: ansible_psrp_operation_timeout default: 20 max_envelope_size: description: - Sets the maximum size of each WSMan message sent to the remote host. - This is measured in bytes. - Defaults to C(150KiB) for compatibility with older hosts. type: int vars: - name: ansible_psrp_max_envelope_size default: 153600 configuration_name: description: - The name of the PowerShell configuration endpoint to connect to. type: str vars: - name: ansible_psrp_configuration_name default: Microsoft.PowerShell """ import base64 import json import logging import os from ansible import constants as C from ansible.errors import AnsibleConnectionFailure, AnsibleError from ansible.errors import AnsibleFileNotFound from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.plugins.connection import ConnectionBase from ansible.plugins.shell.powershell import _common_args from ansible.utils.display import Display from ansible.utils.hashing import secure_hash HAS_PYPSRP = True PYPSRP_IMP_ERR = None try: import pypsrp from pypsrp.complex_objects import GenericComplexObject, PSInvocationState, RunspacePoolState from pypsrp.exceptions import AuthenticationError, WinRMError from pypsrp.host import PSHost, PSHostUserInterface from pypsrp.powershell import PowerShell, RunspacePool from pypsrp.shell import Process, SignalCode, WinRS from pypsrp.wsman import WSMan, AUTH_KWARGS from requests.exceptions import ConnectionError, ConnectTimeout except ImportError as err: HAS_PYPSRP = False PYPSRP_IMP_ERR = err display = Display() class Connection(ConnectionBase): transport = 'psrp' module_implementation_preferences = ('.ps1', '.exe', '') allow_executable = False has_pipelining = True allow_extras = True def __init__(self, *args, **kwargs): self.always_pipeline_modules = True self.has_native_async = True self.runspace = None self.host = None self._shell_type = 'powershell' super(Connection, self).__init__(*args, **kwargs) if not C.DEFAULT_DEBUG: logging.getLogger('pypsrp').setLevel(logging.WARNING) logging.getLogger('requests_credssp').setLevel(logging.INFO) logging.getLogger('urllib3').setLevel(logging.INFO) def _connect(self): if not HAS_PYPSRP: raise AnsibleError("pypsrp or dependencies are not installed: %s" % to_native(PYPSRP_IMP_ERR)) super(Connection, self)._connect() self._build_kwargs() display.vvv("ESTABLISH PSRP CONNECTION FOR USER: %s ON PORT %s TO %s" % (self._psrp_user, self._psrp_port, self._psrp_host), host=self._psrp_host) if not self.runspace: connection = WSMan(**self._psrp_conn_kwargs) # create our psuedo host to capture the exit code and host output host_ui = PSHostUserInterface() self.host = PSHost(None, None, False, "Ansible PSRP Host", None, host_ui, None) self.runspace = RunspacePool( connection, host=self.host, configuration_name=self._psrp_configuration_name ) display.vvvvv( 
"PSRP OPEN RUNSPACE: auth=%s configuration=%s endpoint=%s" % (self._psrp_auth, self._psrp_configuration_name, connection.transport.endpoint), host=self._psrp_host ) try: self.runspace.open() except AuthenticationError as e: raise AnsibleConnectionFailure("failed to authenticate with " "the server: %s" % to_native(e)) except WinRMError as e: raise AnsibleConnectionFailure( "psrp connection failure during runspace open: %s" % to_native(e) ) except (ConnectionError, ConnectTimeout) as e: raise AnsibleConnectionFailure( "Failed to connect to the host via PSRP: %s" % to_native(e) ) self._connected = True return self def reset(self): display.vvvvv("PSRP: Reset Connection", host=self._psrp_host) self.runspace = None self._connect() def exec_command(self, cmd, in_data=None, sudoable=True): super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable) if cmd.startswith(" ".join(_common_args) + " -EncodedCommand"): # This is a PowerShell script encoded by the shell plugin, we will # decode the script and execute it in the runspace instead of # starting a new interpreter to save on time b_command = base64.b64decode(cmd.split(" ")[-1]) script = to_text(b_command, 'utf-16-le') in_data = to_text(in_data, errors="surrogate_or_strict", nonstring="passthru") if in_data and in_data.startswith(u"#!"): # ANSIBALLZ wrapper, we need to get the interpreter and execute # that as the script - note this won't work as basic.py relies # on packages not available on Windows, once fixed we can enable # this path interpreter = to_native(in_data.splitlines()[0][2:]) # script = "$input | &'%s' -" % interpreter # in_data = to_text(in_data) raise AnsibleError("cannot run the interpreter '%s' on the psrp " "connection plugin" % interpreter) # call build_module_command to get the bootstrap wrapper text bootstrap_wrapper = self._shell.build_module_command('', '', '') if bootstrap_wrapper == cmd: # Do not display to the user each invocation of the bootstrap wrapper display.vvv("PSRP: EXEC (via pipeline wrapper)") else: display.vvv("PSRP: EXEC %s" % script, host=self._psrp_host) else: # In other cases we want to execute the cmd as the script. We add on the 'exit $LASTEXITCODE' to ensure the # rc is propagated back to the connection plugin. 
script = to_text(u"%s\nexit $LASTEXITCODE" % cmd) display.vvv(u"PSRP: EXEC %s" % script, host=self._psrp_host) rc, stdout, stderr = self._exec_psrp_script(script, in_data) return rc, stdout, stderr def put_file(self, in_path, out_path): super(Connection, self).put_file(in_path, out_path) display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._psrp_host) out_path = self._shell._unquote(out_path) script = u'''begin { $ErrorActionPreference = "Stop" $path = '%s' $fd = [System.IO.File]::Create($path) $algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create() $bytes = @() } process { $bytes = [System.Convert]::FromBase64String($input) $algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null $fd.Write($bytes, 0, $bytes.Length) } end { $fd.Close() $algo.TransformFinalBlock($bytes, 0, 0) > $null $hash = [System.BitConverter]::ToString($algo.Hash) $hash = $hash.Replace("-", "").ToLowerInvariant() Write-Output -InputObject "{`"sha1`":`"$hash`"}" }''' % self._shell._escape(out_path) cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False) b_in_path = to_bytes(in_path, errors='surrogate_or_strict') if not os.path.exists(b_in_path): raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path)) in_size = os.path.getsize(b_in_path) buffer_size = int(self.runspace.connection.max_payload_size / 4 * 3) # copying files is faster when using the raw WinRM shell and not PSRP # we will create a WinRS shell just for this process # TODO: speed this up as there is overhead creating a shell for this with WinRS(self.runspace.connection, codepage=65001) as shell: process = Process(shell, cmd_parts[0], cmd_parts[1:]) process.begin_invoke() offset = 0 with open(b_in_path, 'rb') as src_file: for data in iter((lambda: src_file.read(buffer_size)), b""): offset += len(data) display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" % (in_path, out_path, offset, len(data)), host=self._psrp_host) b64_data = base64.b64encode(data) + b"\r\n" process.send(b64_data, end=(src_file.tell() == in_size)) # the file was empty, return empty buffer if offset == 0: process.send(b"", end=True) process.end_invoke() process.signal(SignalCode.CTRL_C) if process.rc != 0: raise AnsibleError(to_native(process.stderr)) put_output = json.loads(process.stdout) remote_sha1 = put_output.get("sha1") if not remote_sha1: raise AnsibleError("Remote sha1 was not returned, stdout: '%s', " "stderr: '%s'" % (to_native(process.stdout), to_native(process.stderr))) local_sha1 = secure_hash(in_path) if not remote_sha1 == local_sha1: raise AnsibleError("Remote sha1 hash %s does not match local hash " "%s" % (to_native(remote_sha1), to_native(local_sha1))) def fetch_file(self, in_path, out_path): super(Connection, self).fetch_file(in_path, out_path) display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._psrp_host) in_path = self._shell._unquote(in_path) out_path = out_path.replace('\\', '/') # because we are dealing with base64 data we need to get the max size # of the bytes that the base64 size would equal max_b64_size = int(self.runspace.connection.max_payload_size - (self.runspace.connection.max_payload_size / 4 * 3)) buffer_size = max_b64_size - (max_b64_size % 1024) # setup the file stream with read only mode setup_script = '''$ErrorActionPreference = "Stop" $path = '%s' if (Test-Path -Path $path -PathType Leaf) { $fs = New-Object -TypeName System.IO.FileStream -ArgumentList @( $path, [System.IO.FileMode]::Open, [System.IO.FileAccess]::Read, 
[System.IO.FileShare]::Read ) $buffer_size = %d } elseif (Test-Path -Path $path -PathType Container) { Write-Output -InputObject "[DIR]" } else { Write-Error -Message "$path does not exist" $host.SetShouldExit(1) }''' % (self._shell._escape(in_path), buffer_size) # read the file stream at the offset and return the b64 string read_script = '''$ErrorActionPreference = "Stop" $fs.Seek(%d, [System.IO.SeekOrigin]::Begin) > $null $buffer = New-Object -TypeName byte[] -ArgumentList $buffer_size $bytes_read = $fs.Read($buffer, 0, $buffer_size) if ($bytes_read -gt 0) { $bytes = $buffer[0..($bytes_read - 1)] Write-Output -InputObject ([System.Convert]::ToBase64String($bytes)) }''' # need to run the setup script outside of the local scope so the # file stream stays active between fetch operations rc, stdout, stderr = self._exec_psrp_script(setup_script, use_local_scope=False, force_stop=True) if rc != 0: raise AnsibleError("failed to setup file stream for fetch '%s': %s" % (out_path, to_native(stderr))) elif stdout.strip() == '[DIR]': # to be consistent with other connection plugins, we assume the caller has created the target dir return b_out_path = to_bytes(out_path, errors='surrogate_or_strict') # to be consistent with other connection plugins, we assume the caller has created the target dir offset = 0 with open(b_out_path, 'wb') as out_file: while True: display.vvvvv("PSRP FETCH %s to %s (offset=%d" % (in_path, out_path, offset), host=self._psrp_host) rc, stdout, stderr = self._exec_psrp_script(read_script % offset, force_stop=True) if rc != 0: raise AnsibleError("failed to transfer file to '%s': %s" % (out_path, to_native(stderr))) data = base64.b64decode(stdout.strip()) out_file.write(data) if len(data) < buffer_size: break offset += len(data) rc, stdout, stderr = self._exec_psrp_script("$fs.Close()", force_stop=True) if rc != 0: display.warning("failed to close remote file stream of file " "'%s': %s" % (in_path, to_native(stderr))) def close(self): if self.runspace and self.runspace.state == RunspacePoolState.OPENED: display.vvvvv("PSRP CLOSE RUNSPACE: %s" % (self.runspace.id), host=self._psrp_host) self.runspace.close() self.runspace = None self._connected = False def _build_kwargs(self): self._psrp_host = self.get_option('remote_addr') self._psrp_user = self.get_option('remote_user') self._psrp_pass = self.get_option('remote_password') protocol = self.get_option('protocol') port = self.get_option('port') if protocol is None and port is None: protocol = 'https' port = 5986 elif protocol is None: protocol = 'https' if int(port) != 5985 else 'http' elif port is None: port = 5986 if protocol == 'https' else 5985 self._psrp_protocol = protocol self._psrp_port = int(port) self._psrp_path = self.get_option('path') self._psrp_auth = self.get_option('auth') # cert validation can either be a bool or a path to the cert cert_validation = self.get_option('cert_validation') cert_trust_path = self.get_option('ca_cert') if cert_validation == 'ignore': self._psrp_cert_validation = False elif cert_trust_path is not None: self._psrp_cert_validation = cert_trust_path else: self._psrp_cert_validation = True self._psrp_connection_timeout = self.get_option('connection_timeout') # Can be None self._psrp_read_timeout = self.get_option('read_timeout') # Can be None self._psrp_message_encryption = self.get_option('message_encryption') self._psrp_proxy = self.get_option('proxy') self._psrp_ignore_proxy = boolean(self.get_option('ignore_proxy')) self._psrp_operation_timeout = int(self.get_option('operation_timeout')) 
self._psrp_max_envelope_size = int(self.get_option('max_envelope_size')) self._psrp_configuration_name = self.get_option('configuration_name') self._psrp_reconnection_retries = int(self.get_option('reconnection_retries')) self._psrp_reconnection_backoff = float(self.get_option('reconnection_backoff')) self._psrp_certificate_key_pem = self.get_option('certificate_key_pem') self._psrp_certificate_pem = self.get_option('certificate_pem') self._psrp_credssp_auth_mechanism = self.get_option('credssp_auth_mechanism') self._psrp_credssp_disable_tlsv1_2 = self.get_option('credssp_disable_tlsv1_2') self._psrp_credssp_minimum_version = self.get_option('credssp_minimum_version') self._psrp_negotiate_send_cbt = self.get_option('negotiate_send_cbt') self._psrp_negotiate_delegate = self.get_option('negotiate_delegate') self._psrp_negotiate_hostname_override = self.get_option('negotiate_hostname_override') self._psrp_negotiate_service = self.get_option('negotiate_service') supported_args = [] for auth_kwarg in AUTH_KWARGS.values(): supported_args.extend(auth_kwarg) extra_args = set([v.replace('ansible_psrp_', '') for v in self.get_option('_extras')]) unsupported_args = extra_args.difference(supported_args) for arg in unsupported_args: display.warning("ansible_psrp_%s is unsupported by the current " "psrp version installed" % arg) self._psrp_conn_kwargs = dict( server=self._psrp_host, port=self._psrp_port, username=self._psrp_user, password=self._psrp_pass, ssl=self._psrp_protocol == 'https', path=self._psrp_path, auth=self._psrp_auth, cert_validation=self._psrp_cert_validation, connection_timeout=self._psrp_connection_timeout, encryption=self._psrp_message_encryption, proxy=self._psrp_proxy, no_proxy=self._psrp_ignore_proxy, max_envelope_size=self._psrp_max_envelope_size, operation_timeout=self._psrp_operation_timeout, certificate_key_pem=self._psrp_certificate_key_pem, certificate_pem=self._psrp_certificate_pem, credssp_auth_mechanism=self._psrp_credssp_auth_mechanism, credssp_disable_tlsv1_2=self._psrp_credssp_disable_tlsv1_2, credssp_minimum_version=self._psrp_credssp_minimum_version, negotiate_send_cbt=self._psrp_negotiate_send_cbt, negotiate_delegate=self._psrp_negotiate_delegate, negotiate_hostname_override=self._psrp_negotiate_hostname_override, negotiate_service=self._psrp_negotiate_service, ) # Check if PSRP version supports newer read_timeout argument (needs pypsrp 0.3.0+) if hasattr(pypsrp, 'FEATURES') and 'wsman_read_timeout' in pypsrp.FEATURES: self._psrp_conn_kwargs['read_timeout'] = self._psrp_read_timeout elif self._psrp_read_timeout is not None: display.warning("ansible_psrp_read_timeout is unsupported by the current psrp version installed, " "using ansible_psrp_connection_timeout value for read_timeout instead.") # Check if PSRP version supports newer reconnection_retries argument (needs pypsrp 0.3.0+) if hasattr(pypsrp, 'FEATURES') and 'wsman_reconnections' in pypsrp.FEATURES: self._psrp_conn_kwargs['reconnection_retries'] = self._psrp_reconnection_retries self._psrp_conn_kwargs['reconnection_backoff'] = self._psrp_reconnection_backoff else: if self._psrp_reconnection_retries is not None: display.warning("ansible_psrp_reconnection_retries is unsupported by the current psrp version installed.") if self._psrp_reconnection_backoff is not None: display.warning("ansible_psrp_reconnection_backoff is unsupported by the current psrp version installed.") # add in the extra args that were set for arg in extra_args.intersection(supported_args): option = 
self.get_option('_extras')['ansible_psrp_%s' % arg] self._psrp_conn_kwargs[arg] = option def _exec_psrp_script(self, script, input_data=None, use_local_scope=True, force_stop=False): ps = PowerShell(self.runspace) ps.add_script(script, use_local_scope=use_local_scope) ps.invoke(input=input_data) rc, stdout, stderr = self._parse_pipeline_result(ps) if force_stop: # This is usually not needed because we close the Runspace after our exec and we skip the call to close the # pipeline manually to save on some time. Set to True when running multiple exec calls in the same runspace. # Current pypsrp versions raise an exception if the current state was not RUNNING. We manually set it so we # can call stop without any issues. ps.state = PSInvocationState.RUNNING ps.stop() return rc, stdout, stderr def _parse_pipeline_result(self, pipeline): """ PSRP doesn't have the same concept as other protocols with its output. We need some extra logic to convert the pipeline streams and host output into the format that Ansible understands. :param pipeline: The finished PowerShell pipeline that invoked our commands :return: rc, stdout, stderr based on the pipeline output """ # we try and get the rc from our host implementation, this is set if # exit or $host.SetShouldExit() is called in our pipeline, if not we # set to 0 if the pipeline had not errors and 1 if it did rc = self.host.rc or (1 if pipeline.had_errors else 0) # TODO: figure out a better way of merging this with the host output stdout_list = [] for output in pipeline.output: # Not all pipeline outputs are a string or contain a __str__ value, # we will create our own output based on the properties of the # complex object if that is the case. if isinstance(output, GenericComplexObject) and output.to_string is None: obj_lines = output.property_sets for key, value in output.adapted_properties.items(): obj_lines.append(u"%s: %s" % (key, value)) for key, value in output.extended_properties.items(): obj_lines.append(u"%s: %s" % (key, value)) output_msg = u"\n".join(obj_lines) else: output_msg = to_text(output, nonstring='simplerepr') stdout_list.append(output_msg) if len(self.host.ui.stdout) > 0: stdout_list += self.host.ui.stdout stdout = u"\r\n".join(stdout_list) stderr_list = [] for error in pipeline.streams.error: # the error record is not as fully fleshed out like we usually get # in PS, we will manually create it here command_name = "%s : " % error.command_name if error.command_name else '' position = "%s\r\n" % error.invocation_position_message if error.invocation_position_message else '' error_msg = "%s%s\r\n%s" \ " + CategoryInfo : %s\r\n" \ " + FullyQualifiedErrorId : %s" \ % (command_name, str(error), position, error.message, error.fq_error) stacktrace = error.script_stacktrace if self._play_context.verbosity >= 3 and stacktrace is not None: error_msg += "\r\nStackTrace:\r\n%s" % stacktrace stderr_list.append(error_msg) if len(self.host.ui.stderr) > 0: stderr_list += self.host.ui.stderr stderr = u"\r\n".join([to_text(o) for o in stderr_list]) display.vvvvv("PSRP RC: %d" % rc, host=self._psrp_host) display.vvvvv("PSRP STDOUT: %s" % stdout, host=self._psrp_host) display.vvvvv("PSRP STDERR: %s" % stderr, host=self._psrp_host) # reset the host back output back to defaults, needed if running # multiple pipelines on the same RunspacePool self.host.rc = 0 self.host.ui.stdout = [] self.host.ui.stderr = [] return rc, to_bytes(stdout, encoding='utf-8'), to_bytes(stderr, encoding='utf-8')
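The DOCUMENTATION block at the top of the plugin above declares the variables that _build_kwargs() resolves via get_option(), including the remote_password option whose vars list ansible_password. Below is a minimal inventory exercising a few of those documented variables, in the same INI style as the inventory in the report; the host and credentials are placeholders:

```
[windows]
host.example.com

[windows:vars]
ansible_connection=psrp
ansible_user=someuser@EXAMPLE.COM
ansible_password=secret
ansible_psrp_auth=kerberos
ansible_psrp_cert_validation=ignore
ansible_psrp_negotiate_delegate=true
```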
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
lib/ansible/plugins/connection/winrm.py
# (c) 2014, Chris Church <[email protected]> # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ author: Ansible Core Team connection: winrm short_description: Run tasks over Microsoft's WinRM description: - Run commands or put/fetch on a target via WinRM - This plugin allows extra arguments to be passed that are supported by the protocol but not explicitly defined here. They should take the form of variables declared with the following pattern `ansible_winrm_<option>`. version_added: "2.0" requirements: - pywinrm (python library) options: # figure out more elegant 'delegation' remote_addr: description: - Address of the windows machine default: inventory_hostname vars: - name: ansible_host - name: ansible_winrm_host type: str remote_user: description: - The user to log in as to the Windows machine vars: - name: ansible_user - name: ansible_winrm_user type: str remote_password: description: Authentication password for the C(remote_user). Can be supplied as CLI option. vars: - name: ansible_password - name: ansible_winrm_pass - name: ansible_winrm_password type: str port: description: - port for winrm to connect on remote target - The default is the https (5986) port, if using http it should be 5985 vars: - name: ansible_port - name: ansible_winrm_port default: 5986 type: integer scheme: description: - URI scheme to use - If not set, then will default to C(https) or C(http) if I(port) is C(5985). choices: [http, https] vars: - name: ansible_winrm_scheme type: str path: description: URI path to connect to default: '/wsman' vars: - name: ansible_winrm_path type: str transport: description: - List of winrm transports to attempt to use (ssl, plaintext, kerberos, etc) - If None (the default) the plugin will try to automatically guess the correct list - The choices available depend on your version of pywinrm type: list vars: - name: ansible_winrm_transport kerberos_command: description: kerberos command to use to request a authentication ticket default: kinit vars: - name: ansible_winrm_kinit_cmd type: str kinit_args: description: - Extra arguments to pass to C(kinit) when getting the Kerberos authentication ticket. - By default no extra arguments are passed into C(kinit) unless I(ansible_winrm_kerberos_delegation) is also set. In that case C(-f) is added to the C(kinit) args so a forwardable ticket is retrieved. - If set, the args will overwrite any existing defaults for C(kinit), including C(-f) for a delegated ticket. type: str vars: - name: ansible_winrm_kinit_args version_added: '2.11' kerberos_mode: description: - kerberos usage mode. - The managed option means Ansible will obtain kerberos ticket. - While the manual one means a ticket must already have been obtained by the user. - If having issues with Ansible freezing when trying to obtain the Kerberos ticket, you can either set this to C(manual) and obtain it outside Ansible or install C(pexpect) through pip and try again. choices: [managed, manual] vars: - name: ansible_winrm_kinit_mode type: str connection_timeout: description: - Sets the operation and read timeout settings for the WinRM connection. - Corresponds to the C(operation_timeout_sec) and C(read_timeout_sec) args in pywinrm so avoid setting these vars with this one. - The default value is whatever is set in the installed version of pywinrm. 
vars: - name: ansible_winrm_connection_timeout type: int """ import base64 import logging import os import re import traceback import json import tempfile import shlex import subprocess HAVE_KERBEROS = False try: import kerberos HAVE_KERBEROS = True except ImportError: pass from ansible import constants as C from ansible.errors import AnsibleError, AnsibleConnectionFailure from ansible.errors import AnsibleFileNotFound from ansible.module_utils.json_utils import _filter_non_json_lines from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six.moves.urllib.parse import urlunsplit from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.six import binary_type, PY3 from ansible.plugins.connection import ConnectionBase from ansible.plugins.shell.powershell import _parse_clixml from ansible.utils.hashing import secure_hash from ansible.utils.display import Display # getargspec is deprecated in favour of getfullargspec in Python 3 but # getfullargspec is not available in Python 2 if PY3: from inspect import getfullargspec as getargspec else: from inspect import getargspec try: import winrm from winrm import Response from winrm.protocol import Protocol import requests.exceptions HAS_WINRM = True except ImportError as e: HAS_WINRM = False WINRM_IMPORT_ERR = e try: import xmltodict HAS_XMLTODICT = True except ImportError as e: HAS_XMLTODICT = False XMLTODICT_IMPORT_ERR = e HAS_PEXPECT = False try: import pexpect # echo was added in pexpect 3.3+ which is newer than the RHEL package # we can only use pexpect for kerb auth if echo is a valid kwarg # https://github.com/ansible/ansible/issues/43462 if hasattr(pexpect, 'spawn'): argspec = getargspec(pexpect.spawn.__init__) if 'echo' in argspec.args: HAS_PEXPECT = True except ImportError as e: pass # used to try and parse the hostname and detect if IPv6 is being used try: import ipaddress HAS_IPADDRESS = True except ImportError: HAS_IPADDRESS = False display = Display() class Connection(ConnectionBase): '''WinRM connections over HTTP/HTTPS.''' transport = 'winrm' module_implementation_preferences = ('.ps1', '.exe', '') allow_executable = False has_pipelining = True allow_extras = True def __init__(self, *args, **kwargs): self.always_pipeline_modules = True self.has_native_async = True self.protocol = None self.shell_id = None self.delegate = None self._shell_type = 'powershell' super(Connection, self).__init__(*args, **kwargs) if not C.DEFAULT_DEBUG: logging.getLogger('requests_credssp').setLevel(logging.INFO) logging.getLogger('requests_kerberos').setLevel(logging.INFO) logging.getLogger('urllib3').setLevel(logging.INFO) def _build_winrm_kwargs(self): # this used to be in set_options, as win_reboot needs to be able to # override the conn timeout, we need to be able to build the args # after setting individual options. 
This is called by _connect before # starting the WinRM connection self._winrm_host = self.get_option('remote_addr') self._winrm_user = self.get_option('remote_user') self._winrm_pass = self.get_option('remote_password') self._winrm_port = self.get_option('port') self._winrm_scheme = self.get_option('scheme') # old behaviour, scheme should default to http if not set and the port # is 5985 otherwise https if self._winrm_scheme is None: self._winrm_scheme = 'http' if self._winrm_port == 5985 else 'https' self._winrm_path = self.get_option('path') self._kinit_cmd = self.get_option('kerberos_command') self._winrm_transport = self.get_option('transport') self._winrm_connection_timeout = self.get_option('connection_timeout') if hasattr(winrm, 'FEATURE_SUPPORTED_AUTHTYPES'): self._winrm_supported_authtypes = set(winrm.FEATURE_SUPPORTED_AUTHTYPES) else: # for legacy versions of pywinrm, use the values we know are supported self._winrm_supported_authtypes = set(['plaintext', 'ssl', 'kerberos']) # calculate transport if needed if self._winrm_transport is None or self._winrm_transport[0] is None: # TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic transport_selector = ['ssl'] if self._winrm_scheme == 'https' else ['plaintext'] if HAVE_KERBEROS and ((self._winrm_user and '@' in self._winrm_user)): self._winrm_transport = ['kerberos'] + transport_selector else: self._winrm_transport = transport_selector unsupported_transports = set(self._winrm_transport).difference(self._winrm_supported_authtypes) if unsupported_transports: raise AnsibleError('The installed version of WinRM does not support transport(s) %s' % to_native(list(unsupported_transports), nonstring='simplerepr')) # if kerberos is among our transports and there's a password specified, we're managing the tickets kinit_mode = self.get_option('kerberos_mode') if kinit_mode is None: # HACK: ideally, remove multi-transport stuff self._kerb_managed = "kerberos" in self._winrm_transport and (self._winrm_pass is not None and self._winrm_pass != "") elif kinit_mode == "managed": self._kerb_managed = True elif kinit_mode == "manual": self._kerb_managed = False # arg names we're going passing directly internal_kwarg_mask = set(['self', 'endpoint', 'transport', 'username', 'password', 'scheme', 'path', 'kinit_mode', 'kinit_cmd']) self._winrm_kwargs = dict(username=self._winrm_user, password=self._winrm_pass) argspec = getargspec(Protocol.__init__) supported_winrm_args = set(argspec.args) supported_winrm_args.update(internal_kwarg_mask) passed_winrm_args = set([v.replace('ansible_winrm_', '') for v in self.get_option('_extras')]) unsupported_args = passed_winrm_args.difference(supported_winrm_args) # warn for kwargs unsupported by the installed version of pywinrm for arg in unsupported_args: display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg)) # pass through matching extras, excluding the list we want to treat specially for arg in passed_winrm_args.difference(internal_kwarg_mask).intersection(supported_winrm_args): self._winrm_kwargs[arg] = self.get_option('_extras')['ansible_winrm_%s' % arg] # Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection # auth itself with a private CCACHE. 
def _kerb_auth(self, principal, password): if password is None: password = "" self._kerb_ccache = tempfile.NamedTemporaryFile() display.vvvvv("creating Kerberos CC at %s" % self._kerb_ccache.name) krb5ccname = "FILE:%s" % self._kerb_ccache.name os.environ["KRB5CCNAME"] = krb5ccname krb5env = dict(KRB5CCNAME=krb5ccname) # Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR # '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation). kinit_cmdline = [self._kinit_cmd] kinit_args = self.get_option('kinit_args') if kinit_args: kinit_args = [to_text(a) for a in shlex.split(kinit_args) if a.strip()] kinit_cmdline.extend(kinit_args) elif boolean(self.get_option('_extras').get('ansible_winrm_kerberos_delegation', False)): kinit_cmdline.append('-f') kinit_cmdline.append(principal) # pexpect runs the process in its own pty so it can correctly send # the password as input even on MacOS which blocks subprocess from # doing so. Unfortunately it is not available on the built in Python # so we can only use it if someone has installed it if HAS_PEXPECT: proc_mechanism = "pexpect" command = kinit_cmdline.pop(0) password = to_text(password, encoding='utf-8', errors='surrogate_or_strict') display.vvvv("calling kinit with pexpect for principal %s" % principal) try: child = pexpect.spawn(command, kinit_cmdline, timeout=60, env=krb5env, echo=False) except pexpect.ExceptionPexpect as err: err_msg = "Kerberos auth failure when calling kinit cmd " \ "'%s': %s" % (command, to_native(err)) raise AnsibleConnectionFailure(err_msg) try: child.expect(".*:") child.sendline(password) except OSError as err: # child exited before the pass was sent, Ansible will raise # error based on the rc below, just display the error here display.vvvv("kinit with pexpect raised OSError: %s" % to_native(err)) # technically this is the stdout + stderr but to match the # subprocess error checking behaviour, we will call it stderr stderr = child.read() child.wait() rc = child.exitstatus else: proc_mechanism = "subprocess" password = to_bytes(password, encoding='utf-8', errors='surrogate_or_strict') display.vvvv("calling kinit with subprocess for principal %s" % principal) try: p = subprocess.Popen(kinit_cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=krb5env) except OSError as err: err_msg = "Kerberos auth failure when calling kinit cmd " \ "'%s': %s" % (self._kinit_cmd, to_native(err)) raise AnsibleConnectionFailure(err_msg) stdout, stderr = p.communicate(password + b'\n') rc = p.returncode != 0 if rc != 0: # one last attempt at making sure the password does not exist # in the output exp_msg = to_native(stderr.strip()) exp_msg = exp_msg.replace(to_native(password), "<redacted>") err_msg = "Kerberos auth failure for principal %s with %s: %s" \ % (principal, proc_mechanism, exp_msg) raise AnsibleConnectionFailure(err_msg) display.vvvvv("kinit succeeded for principal %s" % principal) def _winrm_connect(self): ''' Establish a WinRM connection over HTTP/HTTPS. 
''' display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" % (self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host) winrm_host = self._winrm_host if HAS_IPADDRESS: display.debug("checking if winrm_host %s is an IPv6 address" % winrm_host) try: ipaddress.IPv6Address(winrm_host) except ipaddress.AddressValueError: pass else: winrm_host = "[%s]" % winrm_host netloc = '%s:%d' % (winrm_host, self._winrm_port) endpoint = urlunsplit((self._winrm_scheme, netloc, self._winrm_path, '', '')) errors = [] for transport in self._winrm_transport: if transport == 'kerberos': if not HAVE_KERBEROS: errors.append('kerberos: the python kerberos library is not installed') continue if self._kerb_managed: self._kerb_auth(self._winrm_user, self._winrm_pass) display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host) try: winrm_kwargs = self._winrm_kwargs.copy() if self._winrm_connection_timeout: winrm_kwargs['operation_timeout_sec'] = self._winrm_connection_timeout winrm_kwargs['read_timeout_sec'] = self._winrm_connection_timeout + 1 protocol = Protocol(endpoint, transport=transport, **winrm_kwargs) # open the shell from connect so we know we're able to talk to the server if not self.shell_id: self.shell_id = protocol.open_shell(codepage=65001) # UTF-8 display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host) return protocol except Exception as e: err_msg = to_text(e).strip() if re.search(to_text(r'Operation\s+?timed\s+?out'), err_msg, re.I): raise AnsibleError('the connection attempt timed out') m = re.search(to_text(r'Code\s+?(\d{3})'), err_msg) if m: code = int(m.groups()[0]) if code == 401: err_msg = 'the specified credentials were rejected by the server' elif code == 411: return protocol errors.append(u'%s: %s' % (transport, err_msg)) display.vvvvv(u'WINRM CONNECTION ERROR: %s\n%s' % (err_msg, to_text(traceback.format_exc())), host=self._winrm_host) if errors: raise AnsibleConnectionFailure(', '.join(map(to_native, errors))) else: raise AnsibleError('No transport found for WinRM connection') def _winrm_send_input(self, protocol, shell_id, command_id, stdin, eof=False): rq = {'env:Envelope': protocol._get_soap_header( resource_uri='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd', action='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Send', shell_id=shell_id)} stream = rq['env:Envelope'].setdefault('env:Body', {}).setdefault('rsp:Send', {})\ .setdefault('rsp:Stream', {}) stream['@Name'] = 'stdin' stream['@CommandId'] = command_id stream['#text'] = base64.b64encode(to_bytes(stdin)) if eof: stream['@End'] = 'true' protocol.send_message(xmltodict.unparse(rq)) def _winrm_exec(self, command, args=(), from_exec=False, stdin_iterator=None): if not self.protocol: self.protocol = self._winrm_connect() self._connected = True if from_exec: display.vvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host) else: display.vvvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host) command_id = None try: stdin_push_failed = False command_id = self.protocol.run_command(self.shell_id, to_bytes(command), map(to_bytes, args), console_mode_stdin=(stdin_iterator is None)) try: if stdin_iterator: for (data, is_last) in stdin_iterator: self._winrm_send_input(self.protocol, self.shell_id, command_id, data, eof=is_last) except Exception as ex: display.warning("ERROR DURING WINRM SEND INPUT - attempting to recover: %s %s" % (type(ex).__name__, to_text(ex))) display.debug(traceback.format_exc()) 
stdin_push_failed = True # NB: this can hang if the receiver is still running (eg, network failed a Send request but the server's still happy). # FUTURE: Consider adding pywinrm status check/abort operations to see if the target is still running after a failure. resptuple = self.protocol.get_command_output(self.shell_id, command_id) # ensure stdout/stderr are text for py3 # FUTURE: this should probably be done internally by pywinrm response = Response(tuple(to_text(v) if isinstance(v, binary_type) else v for v in resptuple)) # TODO: check result from response and set stdin_push_failed if we have nonzero if from_exec: display.vvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host) else: display.vvvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host) display.vvvvvv('WINRM STDOUT %s' % to_text(response.std_out), host=self._winrm_host) display.vvvvvv('WINRM STDERR %s' % to_text(response.std_err), host=self._winrm_host) if stdin_push_failed: # There are cases where the stdin input failed but the WinRM service still processed it. We attempt to # see if stdout contains a valid json return value so we can ignore this error try: filtered_output, dummy = _filter_non_json_lines(response.std_out) json.loads(filtered_output) except ValueError: # stdout does not contain a return response, stdin input was a fatal error stderr = to_bytes(response.std_err, encoding='utf-8') if stderr.startswith(b"#< CLIXML"): stderr = _parse_clixml(stderr) raise AnsibleError('winrm send_input failed; \nstdout: %s\nstderr %s' % (to_native(response.std_out), to_native(stderr))) return response except requests.exceptions.Timeout as exc: raise AnsibleConnectionFailure('winrm connection error: %s' % to_native(exc)) finally: if command_id: self.protocol.cleanup_command(self.shell_id, command_id) def _connect(self): if not HAS_WINRM: raise AnsibleError("winrm or requests is not installed: %s" % to_native(WINRM_IMPORT_ERR)) elif not HAS_XMLTODICT: raise AnsibleError("xmltodict is not installed: %s" % to_native(XMLTODICT_IMPORT_ERR)) super(Connection, self)._connect() if not self.protocol: self._build_winrm_kwargs() # build the kwargs from the options set self.protocol = self._winrm_connect() self._connected = True return self def reset(self): self.protocol = None self.shell_id = None self._connect() def _wrapper_payload_stream(self, payload, buffer_size=200000): payload_bytes = to_bytes(payload) byte_count = len(payload_bytes) for i in range(0, byte_count, buffer_size): yield payload_bytes[i:i + buffer_size], i + buffer_size >= byte_count def exec_command(self, cmd, in_data=None, sudoable=True): super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable) cmd_parts = self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False) # TODO: display something meaningful here display.vvv("EXEC (via pipeline wrapper)") stdin_iterator = None if in_data: stdin_iterator = self._wrapper_payload_stream(in_data) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True, stdin_iterator=stdin_iterator) result.std_out = to_bytes(result.std_out) result.std_err = to_bytes(result.std_err) # parse just stderr from CLIXML output if result.std_err.startswith(b"#< CLIXML"): try: result.std_err = _parse_clixml(result.std_err) except Exception: # unsure if we're guaranteed a valid xml doc- use raw output in case of error pass return (result.status_code, result.std_out, result.std_err) # FUTURE: determine buffer size at runtime via remote winrm config? 
def _put_file_stdin_iterator(self, in_path, out_path, buffer_size=250000): in_size = os.path.getsize(to_bytes(in_path, errors='surrogate_or_strict')) offset = 0 with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file: for out_data in iter((lambda: in_file.read(buffer_size)), b''): offset += len(out_data) self._display.vvvvv('WINRM PUT "%s" to "%s" (offset=%d size=%d)' % (in_path, out_path, offset, len(out_data)), host=self._winrm_host) # yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded b64_data = base64.b64encode(out_data) + b'\r\n' # cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal yield b64_data, (in_file.tell() == in_size) if offset == 0: # empty file, return an empty buffer + eof to close it yield "", True def put_file(self, in_path, out_path): super(Connection, self).put_file(in_path, out_path) out_path = self._shell._unquote(out_path) display.vvv('PUT "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host) if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')): raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path)) script_template = u''' begin {{ $path = '{0}' $DebugPreference = "Continue" $ErrorActionPreference = "Stop" Set-StrictMode -Version 2 $fd = [System.IO.File]::Create($path) $sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create() $bytes = @() #initialize for empty file case }} process {{ $bytes = [System.Convert]::FromBase64String($input) $sha1.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) | Out-Null $fd.Write($bytes, 0, $bytes.Length) }} end {{ $sha1.TransformFinalBlock($bytes, 0, 0) | Out-Null $hash = [System.BitConverter]::ToString($sha1.Hash).Replace("-", "").ToLowerInvariant() $fd.Close() Write-Output "{{""sha1"":""$hash""}}" }} ''' script = script_template.format(self._shell._escape(out_path)) cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], stdin_iterator=self._put_file_stdin_iterator(in_path, out_path)) # TODO: improve error handling if result.status_code != 0: raise AnsibleError(to_native(result.std_err)) try: put_output = json.loads(result.std_out) except ValueError: # stdout does not contain a valid response stderr = to_bytes(result.std_err, encoding='utf-8') if stderr.startswith(b"#< CLIXML"): stderr = _parse_clixml(stderr) raise AnsibleError('winrm put_file failed; \nstdout: %s\nstderr %s' % (to_native(result.std_out), to_native(stderr))) remote_sha1 = put_output.get("sha1") if not remote_sha1: raise AnsibleError("Remote sha1 was not returned") local_sha1 = secure_hash(in_path) if not remote_sha1 == local_sha1: raise AnsibleError("Remote sha1 hash {0} does not match local hash {1}".format(to_native(remote_sha1), to_native(local_sha1))) def fetch_file(self, in_path, out_path): super(Connection, self).fetch_file(in_path, out_path) in_path = self._shell._unquote(in_path) out_path = out_path.replace('\\', '/') # consistent with other connection plugins, we assume the caller has created the target dir display.vvv('FETCH "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host) buffer_size = 2**19 # 0.5MB chunks out_file = None try: offset = 0 while True: try: script = ''' $path = "%(path)s" If (Test-Path -Path $path -PathType Leaf) { $buffer_size = %(buffer_size)d $offset = %(offset)d $stream = New-Object -TypeName 
IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite) $stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null $buffer = New-Object -TypeName byte[] $buffer_size $bytes_read = $stream.Read($buffer, 0, $buffer_size) if ($bytes_read -gt 0) { $bytes = $buffer[0..($bytes_read - 1)] [System.Convert]::ToBase64String($bytes) } $stream.Close() > $null } ElseIf (Test-Path -Path $path -PathType Container) { Write-Host "[DIR]"; } Else { Write-Error "$path does not exist"; Exit 1; } ''' % dict(buffer_size=buffer_size, path=self._shell._escape(in_path), offset=offset) display.vvvvv('WINRM FETCH "%s" to "%s" (offset=%d)' % (in_path, out_path, offset), host=self._winrm_host) cmd_parts = self._shell._encode_script(script, as_list=True, preserve_rc=False) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:]) if result.status_code != 0: raise IOError(to_native(result.std_err)) if result.std_out.strip() == '[DIR]': data = None else: data = base64.b64decode(result.std_out.strip()) if data is None: break else: if not out_file: # If out_path is a directory and we're expecting a file, bail out now. if os.path.isdir(to_bytes(out_path, errors='surrogate_or_strict')): break out_file = open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb') out_file.write(data) if len(data) < buffer_size: break offset += len(data) except Exception: traceback.print_exc() raise AnsibleError('failed to transfer file to "%s"' % to_native(out_path)) finally: if out_file: out_file.close() def close(self): if self.protocol and self.shell_id: display.vvvvv('WINRM CLOSE SHELL: %s' % self.shell_id, host=self._winrm_host) self.protocol.close_shell(self.shell_id) self.shell_id = None self.protocol = None self._connected = False
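The managed-Kerberos path in `_kerb_auth` above boils down to running kinit against a private credential cache. Below is a stripped-down, standard-library-only sketch of that pattern with a placeholder principal and password; the pexpect branch, the delegation (`-f`) flag, and the password-redaction logic of the real plugin are omitted:

```python
# Obtain a Kerberos TGT into a private, per-connection credential cache so
# parallel connections do not overwrite each other's tickets.
import os
import subprocess
import tempfile


def kinit_private_ccache(principal, password, kinit_cmd="kinit"):
    # The ticket lives only as long as this file object does.
    ccache = tempfile.NamedTemporaryFile()
    krb5ccname = "FILE:%s" % ccache.name
    # Export the cache location so the GSSAPI/kerberos library used for the
    # actual connection later picks up the same ticket.
    os.environ["KRB5CCNAME"] = krb5ccname
    # Unlike the plugin (which passes a minimal env), inherit the full
    # environment here so kinit is reliably found on PATH.
    env = dict(os.environ, KRB5CCNAME=krb5ccname)
    p = subprocess.Popen([kinit_cmd, principal], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         env=env)
    stdout, stderr = p.communicate(password.encode("utf-8") + b"\n")
    if p.returncode != 0:
        raise RuntimeError("kinit failed for %s: %s"
                           % (principal, stderr.decode("utf-8", "replace")))
    return ccache  # keep a reference alive for the lifetime of the connection


# ccache = kinit_private_ccache("user@EXAMPLE.COM", "secret")
```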
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/action_plugins/delegation_action.py
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/aliases
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/connection_plugins/delegation_connection.py
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/inventory.ini
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while it works with no code change in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task docker container before running the play, the play works; in Ansible 2.9.9 the play will run with no manual ticket creation. Besides the realms configuration (which I am sure is right, as the code works for Ansible 2.9.9), below is the rest of the krb5 configuration; I have always had default_ccache_name disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_docker command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WINRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/runme.sh
closed
ansible/ansible
https://github.com/ansible/ansible
71,103
winrm Kerberos automatic ticket management doesn't work
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> WinRM Kerberos tickets are not getting generated for the playbook tasks. The issue occurs in versions 2.9.10 and 2.9.11, while the same code works unchanged in 2.9.9. In the affected Ansible versions (2.9.10 and 2.9.11), if I generate a Kerberos ticket (via kinit) in the awx_task container before running the play, the play works; in Ansible 2.9.9 the play runs with no manual ticket creation. Besides the realms configuration (which I am sure is correct, as the same code works with Ansible 2.9.9), the rest of the krb5 configuration is below; default_ccache_name has always been disabled. ``` bash-4.4# cat /etc/krb5.conf [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = <DOMAIN NAME> dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false # default_ccache_name = KEYRING:persistent:%{uid} ``` Although it fails in AWX, the following works from the awx_task command line: ``` bash-4.4# cat sam_inven [windows] <HOST FQDN> [windows:vars] ansible_user=<AD ACCOUNT>@<DOMAIN NAME> ansible_password=<AD ACCOUNT Password> ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=kerberos ansible_winrm_server_cert_validation=ignore ansible_winrm_kerberos_delegation=true become_method=runas ``` ``` bash-4.4# cat test-play.yml --- - name: PROXY TEST hosts: <HOST FQDN> gather_facts: no collections: - community.vmware tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 20 register: result_wait - name: Show WinRM result debug: var: result_wait ``` ``` bash-4.4# ansible-playbook -i sam_inven test-play.yml PLAY [PROXY TEST] ************************************************************************************************************************************** TASK [Wait for system connection via WinRM] ************************************************************************************************************ ok: [<HOST FQDN>] TASK [Show WinRM result] ******************************************************************************************************************************* ok: [<HOST FQDN>] => { "result_wait": { "changed": false, "elapsed": 1, "failed": false } } PLAY RECAP ********************************************************************************************************************************************* <HOST FQDN> : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 bash-4.4# ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> WinRM Kerberos ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.11 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)] ``` ##### CONFIGURATION <!--- Paste verbatim output from 
"ansible-config dump --only-changed" between quotes --> ```paste below bash-4.4# ansible-config dump --only-changed bash-4.4# ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ansible running as AWX docker images (official) based on CentOS Linux release 8.1.1911 (Core) Target hosts are Windows machines running Windows Server 2016 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> In AWX inventory host var: ansible_connection: winrm ansible_winrm_transport: kerberos ansible_winrm_server_cert_validation: ignore ansible_winrm_kerberos_delegation: true become_method: runas ansible_port: 5986 ```yaml --- - name: TEST hosts: localhost gather_facts: no tasks: - name: Wait for system connection via WinRM wait_for_connection: timeout: 60 register: result_wait delegate_to: <host FQDN> ignore_errors: yes - name: Show WinRM result debug: var: result_wait ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ``` { "elapsed": 1, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>" } } ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below { "msg": "timed out waiting for ping module test success: kerberos: authGSSClientInit() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), (\"Can't find client principal <ACCOUNT NAME>@<DOMAIN NAME> in cache collection\", -1765328243))", "elapsed": 60, "_ansible_no_log": false, "changed": false, "_ansible_delegated_vars": { "ansible_host": "<HOST FQDN>", "ansible_port": 5986, "ansible_user": "<acount name>@<DOMAIN NAME>" } } ```
https://github.com/ansible/ansible/issues/71103
https://github.com/ansible/ansible/pull/71136
5f8b45a70e8e4e378cdafde6cc6c39f32af39e65
3f22f79e73af4398d03b0c1676bb8efde32ea607
2020-08-05T07:01:45Z
python
2020-08-07T23:06:32Z
test/integration/targets/connection_delegation/test.yml
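A note on the two Kerberos records above: whether the play works without a manual kinit depends on the winrm connection plugin's managed-kinit path. Below is a minimal group_vars sketch that makes that path explicit, assuming the pywinrm Kerberos extras are installed on the controller; the host, account, and vault variable names are placeholders, not values from the report.
```yaml
# Hypothetical group_vars/windows.yml for the scenario above; names are placeholders.
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore
ansible_winrm_kerberos_delegation: true
# 'managed' asks Ansible to obtain the ticket itself from ansible_user /
# ansible_password before connecting; 'manual' expects a pre-existing ticket,
# which is what the reporter's kinit workaround provided on 2.9.10/2.9.11.
ansible_winrm_kinit_mode: managed
ansible_user: myuser@EXAMPLE.COM
ansible_password: "{{ vaulted_ad_password }}"
```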
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_eos.rst
.. _eos_platform_options: *************************************** EOS Platform Options *************************************** Arista EOS supports multiple connections. This page offers details on how each connection works in Ansible and how to use it. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== ========================= .. CLI eAPI ==================== ========================================== ========================= Protocol SSH HTTP(S) Credentials uses SSH keys / SSH-agent if present uses HTTPS certificates if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) via a web proxy Connection Settings ``ansible_connection: network_cli`` ``ansible_connection: httpapi`` OR ``ansible_connection: local`` with ``transport: eapi`` in the ``provider`` dictionary |enable_mode| supported: |br| supported: |br| * use ``ansible_become: yes`` * ``httpapi`` with ``ansible_become_method: enable`` uses ``ansible_become: yes`` with ``ansible_become_method: enable`` * ``local`` uses ``authorize: yes`` and ``auth_pass:`` in the ``provider`` dictionary Returned Data Format ``stdout[0].`` ``stdout[0].messages[0].`` ==================== ========================================== ========================= .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) For legacy playbooks, EOS still supports ``ansible_connection: local``. We recommend modernizing to use ``ansible_connection: network_cli`` or ``ansible_connection: httpapi`` as soon as possible. Using CLI in Ansible ==================== Example CLI ``group_vars/eos.yml`` ---------------------------------- .. code-block:: yaml ansible_connection: network_cli ansible_network_os: eos ansible_user: myuser ansible_password: !vault... ansible_become: yes ansible_become_method: enable ansible_become_password: !vault... ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Backup current switch config (eos) eos_config: backup: yes register: backup_eos_location when: ansible_network_os == 'eos' Using eAPI in Ansible ===================== Enabling eAPI ------------- Before you can use eAPI to connect to a switch, you must enable eAPI. To enable eAPI on a new switch via Ansible, use the ``eos_eapi`` module via the CLI connection. Set up group_vars/eos.yml just like in the CLI example above, then run a playbook task like this: .. code-block:: yaml - name: Enable eAPI eos_eapi: enable_http: yes enable_https: yes become: true become_method: enable when: ansible_network_os == 'eos' You can find more options for enabling HTTP/HTTPS connections in the :ref:`eos_eapi <eos_eapi_module>` module documentation. Once eAPI is enabled, change your ``group_vars/eos.yml`` to use the eAPI connection. 
Example eAPI ``group_vars/eos.yml`` ----------------------------------- .. code-block:: yaml ansible_connection: httpapi ansible_network_os: eos ansible_user: myuser ansible_password: !vault... ansible_become: yes ansible_become_method: enable proxy_env: http_proxy: http://proxy.example.com:8080 - If you are accessing your host directly (not through a web proxy) you can remove the ``proxy_env`` configuration. - If you are accessing your host through a web proxy using ``https``, change ``http_proxy`` to ``https_proxy``. Example eAPI Task ----------------- .. code-block:: yaml - name: Backup current switch config (eos) eos_config: backup: yes register: backup_eos_location environment: "{{ proxy_env }}" when: ansible_network_os == 'eos' In this example the ``proxy_env`` variable defined in ``group_vars`` gets passed to the ``environment`` option of the module in the task. eAPI examples with ``connection: local`` ----------------------------------------- ``group_vars/eos.yml``: .. code-block:: yaml ansible_connection: local ansible_network_os: eos ansible_user: myuser ansible_password: !vault... eapi: host: "{{ inventory_hostname }}" transport: eapi authorize: yes auth_pass: !vault... proxy_env: http_proxy: http://proxy.example.com:8080 eAPI task: .. code-block:: yaml - name: Backup current switch config (eos) eos_config: backup: yes provider: "{{ eapi }}" register: backup_eos_location environment: "{{ proxy_env }}" when: ansible_network_os == 'eos' In this example two variables defined in ``group_vars`` get passed to the module of the task: - the ``eapi`` variable gets passed to the ``provider`` option of the module - the ``proxy_env`` variable gets passed to the ``environment`` option of the module .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
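Since the issue tracked by this record is the move to FQCN, the short module names in the guide above would become collection-qualified. A hedged sketch of the backup task in FQCN form, assuming the EOS modules live in the arista.eos collection on Galaxy and that ansible_network_os takes the collection-qualified value:
```yaml
- name: Backup current switch config (eos)
  arista.eos.eos_config:
    backup: yes
  register: backup_eos_location
  when: ansible_network_os == 'arista.eos.eos'
```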
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_frr.rst
.. _frr_platform_options: *************************************** FRR Platform Options *************************************** This page offers details on connection options to manage FRR using Ansible. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== .. CLI ==================== ========================================== Protocol SSH Credentials uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) Connection Settings ``ansible_connection: network_cli`` |enable_mode| not supported Returned Data Format ``stdout[0].`` ==================== ========================================== .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) Using CLI in Ansible ==================== Example CLI ``group_vars/frr.yml`` ---------------------------------- .. code-block:: yaml ansible_connection: network_cli ansible_network_os: frr ansible_user: frruser ansible_password: !vault... ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"' - The `ansible_user` should be a part of the `frrvty` group and should have the default shell set to `/bin/vtysh`. - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Gather FRR facts frr_facts: gather_subset: - config - hardware .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_ios.rst
.. _ios_platform_options: *************************************** IOS Platform Options *************************************** IOS supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on IOS in Ansible. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== .. CLI ==================== ========================================== Protocol SSH Credentials uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) Connection Settings ``ansible_connection: network_cli`` |enable_mode| supported: use ``ansible_become: yes`` with ``ansible_become_method: enable`` and ``ansible_become_password:`` Returned Data Format ``stdout[0].`` ==================== ========================================== .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) For legacy playbooks, IOS still supports ``ansible_connection: local``. We recommend modernizing to use ``ansible_connection: network_cli`` as soon as possible. Using CLI in Ansible ==================== Example CLI ``group_vars/ios.yml`` ---------------------------------- .. code-block:: yaml ansible_connection: network_cli ansible_network_os: ios ansible_user: myuser ansible_password: !vault... ansible_become: yes ansible_become_method: enable ansible_become_password: !vault... ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Backup current switch config (ios) ios_config: backup: yes register: backup_ios_location when: ansible_network_os == 'ios' .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_iosxr.rst
.. _iosxr_platform_options: *************************************** IOS-XR Platform Options *************************************** IOS-XR supports multiple connections. This page offers details on how each connection works in Ansible and how to use it. .. contents:: Topic Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== ========================= .. CLI NETCONF only for modules ``iosxr_banner``, ``iosxr_interface``, ``iosxr_logging``, ``iosxr_system``, ``iosxr_user`` ==================== ========================================== ========================= Protocol SSH XML over SSH Credentials uses SSH keys / SSH-agent if present uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) via a bastion (jump host) Connection Settings ``ansible_connection: network_cli`` ``ansible_connection: netconf`` |enable_mode| not supported not supported Returned Data Format Refer to individual module documentation Refer to individual module documentation ==================== ========================================== ========================= .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) For legacy playbooks, Ansible still supports ``ansible_connection=local`` on all IOS-XR modules. We recommend modernizing to use ``ansible_connection=netconf`` or ``ansible_connection=network_cli`` as soon as possible. Using CLI in Ansible ==================== Example CLI inventory ``[iosxr:vars]`` -------------------------------------- .. code-block:: yaml [iosxr:vars] ansible_connection=network_cli ansible_network_os=iosxr ansible_user=myuser ansible_password=!vault... ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Retrieve IOS-XR version iosxr_command: commands: show version when: ansible_network_os == 'iosxr' Using NETCONF in Ansible ======================== Enabling NETCONF ---------------- Before you can use NETCONF to connect to a switch, you must: - install the ``ncclient`` python package on your control node(s) with ``pip install ncclient`` - enable NETCONF on the Cisco IOS-XR device(s) To enable NETCONF on a new switch via Ansible, use the ``iosxr_netconf`` module via the CLI connection. Set up your platform-level variables just like in the CLI example above, then run a playbook task like this: .. code-block:: yaml - name: Enable NETCONF connection: network_cli iosxr_netconf: when: ansible_network_os == 'iosxr' Once NETCONF is enabled, change your variables to use the NETCONF connection. Example NETCONF inventory ``[iosxr:vars]`` ------------------------------------------ .. 
code-block:: yaml [iosxr:vars] ansible_connection=netconf ansible_network_os=iosxr ansible_user=myuser ansible_password=!vault | ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"' Example NETCONF Task -------------------- .. code-block:: yaml - name: Configure hostname and domain-name iosxr_system: hostname: iosxr01 domain_name: test.example.com domain_search: - ansible.com - redhat.com - cisco.com .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_junos.rst
.. _junos_platform_options: *************************************** Junos OS Platform Options *************************************** Juniper Junos OS supports multiple connections. This page offers details on how each connection works in Ansible and how to use it. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== ========================= .. CLI NETCONF ``junos_netconf`` & ``junos_command`` all modules except ``junos_netconf``, modules only which enables NETCONF ==================== ========================================== ========================= Protocol SSH XML over SSH Credentials uses SSH keys / SSH-agent if present uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) via a bastion (jump host) Connection Settings ``ansible_connection: network_cli`` ``ansible_connection: netconf`` |enable_mode| not supported by Junos OS not supported by Junos OS Returned Data Format ``stdout[0].`` * json: ``result[0]['software-information'][0]['host-name'][0]['data'] foo lo0`` * text: ``result[1].interface-information[0].physical-interface[0].name[0].data foo lo0`` * xml: ``result[1].rpc-reply.interface-information[0].physical-interface[0].name[0].data foo lo0`` ==================== ========================================== ========================= .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) For legacy playbooks, Ansible still supports ``ansible_connection=local`` on all JUNOS modules. We recommend modernizing to use ``ansible_connection=netconf`` or ``ansible_connection=network_cli`` as soon as possible. Using CLI in Ansible ==================== Example CLI inventory ``[junos:vars]`` -------------------------------------- .. code-block:: yaml [junos:vars] ansible_connection=network_cli ansible_network_os=junos ansible_user=myuser ansible_password=!vault... ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Retrieve Junos OS version junos_command: commands: show version when: ansible_network_os == 'junos' Using NETCONF in Ansible ======================== Enabling NETCONF ---------------- Before you can use NETCONF to connect to a switch, you must: - install the ``ncclient`` python package on your control node(s) with ``pip install ncclient`` - enable NETCONF on the Junos OS device(s) To enable NETCONF on a new switch via Ansible, use the ``junos_netconf`` module via the CLI connection. Set up your platform-level variables just like in the CLI example above, then run a playbook task like this: .. code-block:: yaml - name: Enable NETCONF connection: network_cli junos_netconf: when: ansible_network_os == 'junos' Once NETCONF is enabled, change your variables to use the NETCONF connection. 
Example NETCONF inventory ``[junos:vars]`` ------------------------------------------ .. code-block:: yaml [junos:vars] ansible_connection=netconf ansible_network_os=junos ansible_user=myuser ansible_password=!vault | ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"' Example NETCONF Task -------------------- .. code-block:: yaml - name: Backup current switch config (junos) junos_config: backup: yes register: backup_junos_location when: ansible_network_os == 'junos' .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
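The Junos guide above gets the same FQCN treatment; a hedged sketch of the NETCONF backup task, assuming the modules moved to the junipernetworks.junos collection on Galaxy:
```yaml
- name: Backup current switch config (junos)
  junipernetworks.junos.junos_config:
    backup: yes
  register: backup_junos_location
  when: ansible_network_os == 'junipernetworks.junos.junos'
```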
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_netconf_enabled.rst
.. _netconf_enabled_platform_options: *************************************** Netconf enabled Platform Options *************************************** This page offers details on how the netconf connection works in Ansible and how to use it. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== .. NETCONF all modules except ``junos_netconf``, which enables NETCONF ==================== ========================================== Protocol XML over SSH Credentials uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) Connection Settings ``ansible_connection: netconf`` ==================== ========================================== For legacy playbooks, Ansible still supports ``ansible_connection=local`` for the netconf_config module only. We recommend modernizing to use ``ansible_connection=netconf`` as soon as possible. Using NETCONF in Ansible ======================== Enabling NETCONF ---------------- Before you can use NETCONF to connect to a switch, you must: - install the ``ncclient`` Python package on your control node(s) with ``pip install ncclient`` - enable NETCONF on the Junos OS device(s) To enable NETCONF on a new switch via Ansible, use the platform specific module via the CLI connection or set it manually. For example set up your platform-level variables just like in the CLI example above, then run a playbook task like this: .. code-block:: yaml - name: Enable NETCONF connection: network_cli junos_netconf: when: ansible_network_os == 'junos' Once NETCONF is enabled, change your variables to use the NETCONF connection. Example NETCONF inventory ``[junos:vars]`` ------------------------------------------ .. code-block:: yaml [junos:vars] ansible_connection=netconf ansible_network_os=junos ansible_user=myuser ansible_password=!vault | Example NETCONF Task -------------------- .. code-block:: yaml - name: Backup current switch config netconf_config: backup: yes register: backup_junos_location Example NETCONF Task with configurable variables ------------------------------------------------ .. code-block:: yaml - name: configure interface while providing different private key file path netconf_config: backup: yes register: backup_junos_location vars: ansible_private_key_file: /home/admin/.ssh/newprivatekeyfile Note: For netconf connection plugin configurable variables see :ref:`netconf <netconf_connection>`. Bastion/Jumphost Configuration ------------------------------ To use a jump host to connect to a NETCONF enabled device you must set the ``ANSIBLE_NETCONF_SSH_CONFIG`` environment variable. ``ANSIBLE_NETCONF_SSH_CONFIG`` can be set to either: - 1 or TRUE (to trigger the use of the default SSH config file ~/.ssh/config) - The absolute path to a custom SSH config file. The SSH config file should look something like: .. code-block:: ini Host * proxycommand ssh -o StrictHostKeyChecking=no -W %h:%p [email protected] StrictHostKeyChecking no Authentication for the jump host must use key based authentication. You can either specify the private key used in the SSH config file: .. code-block:: ini IdentityFile "/absolute/path/to/private-key.pem" Or you can use an ssh-agent. 
ansible_network_os auto-detection --------------------------------- If ``ansible_network_os`` is not specified for a host, then Ansible will attempt to automatically detect what ``network_os`` plugin to use. ``ansible_network_os`` auto-detection can also be triggered by using ``auto`` as the ``ansible_network_os``. (Note: Previously ``default`` was used instead of ``auto``). .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
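For the platform-agnostic netconf guide above, the generic modules belong to the ansible.netcommon collection rather than a vendor collection; a hedged FQCN rewrite of the backup task:
```yaml
- name: Backup current switch config
  ansible.netcommon.netconf_config:
    backup: yes
  register: backup_junos_location
```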
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_nxos.rst
.. _nxos_platform_options: *************************************** NXOS Platform Options *************************************** Cisco NXOS supports multiple connections. This page offers details on how each connection works in Ansible and how to use it. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== ========================= .. CLI NX-API ==================== ========================================== ========================= Protocol SSH HTTP(S) Credentials uses SSH keys / SSH-agent if present uses HTTPS certificates if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) via a web proxy Connection Settings ``ansible_connection: network_cli`` ``ansible_connection: httpapi`` OR ``ansible_connection: local`` with ``transport: nxapi`` in the ``provider`` dictionary |enable_mode| supported: use ``ansible_become: yes`` not supported by NX-API with ``ansible_become_method: enable`` and ``ansible_become_password:`` Returned Data Format ``stdout[0].`` ``stdout[0].messages[0].`` ==================== ========================================== ========================= .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) |br| supported as of 2.5.3 For legacy playbooks, NXOS still supports ``ansible_connection: local``. We recommend modernizing to use ``ansible_connection: network_cli`` or ``ansible_connection: httpapi`` as soon as possible. Using CLI in Ansible ==================== Example CLI ``group_vars/nxos.yml`` ----------------------------------- .. code-block:: yaml ansible_connection: network_cli ansible_network_os: nxos ansible_user: myuser ansible_password: !vault... ansible_become: yes ansible_become_method: enable ansible_become_password: !vault... ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Backup current switch config (nxos) nxos_config: backup: yes register: backup_nxos_location when: ansible_network_os == 'nxos' Using NX-API in Ansible ======================= Enabling NX-API --------------- Before you can use NX-API to connect to a switch, you must enable NX-API. To enable NX-API on a new switch via Ansible, use the ``nxos_nxapi`` module via the CLI connection. Set up group_vars/nxos.yml just like in the CLI example above, then run a playbook task like this: .. code-block:: yaml - name: Enable NX-API nxos_nxapi: enable_http: yes enable_https: yes when: ansible_network_os == 'nxos' To find out more about the options for enabling HTTP/HTTPS and local http see the :ref:`nxos_nxapi <nxos_nxapi_module>` module documentation. Once NX-API is enabled, change your ``group_vars/nxos.yml`` to use the NX-API connection. Example NX-API ``group_vars/nxos.yml`` -------------------------------------- .. 
code-block:: yaml ansible_connection: httpapi ansible_network_os: nxos ansible_user: myuser ansible_password: !vault... proxy_env: http_proxy: http://proxy.example.com:8080 - If you are accessing your host directly (not through a web proxy) you can remove the ``proxy_env`` configuration. - If you are accessing your host through a web proxy using ``https``, change ``http_proxy`` to ``https_proxy``. Example NX-API Task ------------------- .. code-block:: yaml - name: Backup current switch config (nxos) nxos_config: backup: yes register: backup_nxos_location environment: "{{ proxy_env }}" when: ansible_network_os == 'nxos' In this example the ``proxy_env`` variable defined in ``group_vars`` gets passed to the ``environment`` option of the module used in the task. .. include:: shared_snippets/SSH_warning.txt Cisco Nexus Platform Support Matrix =================================== The following platforms and software versions have been certified by Cisco to work with this version of Ansible. .. table:: Platform / Software Minimum Requirements :align: center =================== ===================== Supported Platforms Minimum NX-OS Version =================== ===================== Cisco Nexus N3k 7.0(3)I2(5) and later Cisco Nexus N9k 7.0(3)I2(5) and later Cisco Nexus N5k 7.3(0)N1(1) and later Cisco Nexus N6k 7.3(0)N1(1) and later Cisco Nexus N7k 7.3(0)D1(1) and later =================== ===================== .. table:: Platform Models :align: center ======== ============================================== Platform Description ======== ============================================== N3k Support includes N30xx, N31xx and N35xx models N5k Support includes all N5xxx models N6k Support includes all N6xxx models N7k Support includes all N7xxx models N9k Support includes all N9xxx models ======== ============================================== .. seealso:: :ref:`timeout_options`
closed
ansible/ansible
https://github.com/ansible/ansible
69,374
Update Ansible-maintained scenario guides to use FQCN
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> - [x] Update to use FQCN - [x] Add link to collection on Galaxy <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> docs.ansible.com ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. OS version, browser, etc. --> ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/69374
https://github.com/ansible/ansible/pull/70369
ae1291004e2d6d127f70a2588012d5b1342d9dee
172230d7b80c8565c4d9d6b6a8b301932b6785c0
2020-05-07T15:57:08Z
python
2020-08-10T20:09:35Z
docs/docsite/rst/network/user_guide/platform_vyos.rst
.. _vyos_platform_options: *************************************** VyOS Platform Options *************************************** This page offers details on connection options to manage VyOS using Ansible. .. contents:: Topics Connections Available ================================================================================ .. table:: :class: documentation-table ==================== ========================================== .. CLI ==================== ========================================== Protocol SSH Credentials uses SSH keys / SSH-agent if present accepts ``-u myuser -k`` if using password Indirect Access via a bastion (jump host) Connection Settings ``ansible_connection: network_cli`` |enable_mode| not supported Returned Data Format Refer to individual module documentation ==================== ========================================== .. |enable_mode| replace:: Enable Mode |br| (Privilege Escalation) For legacy playbooks, VyOS still supports ``ansible_connection: local``. We recommend modernizing to use ``ansible_connection: network_cli`` as soon as possible. Using CLI in Ansible ==================== Example CLI ``group_vars/vyos.yml`` ----------------------------------- .. code-block:: yaml ansible_connection: network_cli ansible_network_os: vyos ansible_user: myuser ansible_password: !vault... ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"' - If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_password`` configuration. - If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration. - If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables. Example CLI Task ---------------- .. code-block:: yaml - name: Retrieve VyOS version info vyos_command: commands: show version when: ansible_network_os == 'vyos' .. include:: shared_snippets/SSH_warning.txt .. seealso:: :ref:`timeout_options`
closed
ansible/ansible
https://github.com/ansible/ansible
67,003
TOML inventory examples in documentation are confusing
##### SUMMARY When browsing the [toml inventory source](https://docs.ansible.com/ansible/latest/plugins/inventory/toml.html) documentation, I found the examples included on that page to be confusing. It looks like there should be three separate example TOML files, but instead, there's one code listing, with three YAML parameters, like: ```yaml example1: | toml here... example2: | toml here... example3: | toml here... ``` This makes it confusing as to how I'm supposed to format my toml-based inventory, because it looks like I can define multiple TOML inventories in one YAML file as arbitrary strings, or something along those lines. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME toml ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION N/A
https://github.com/ansible/ansible/issues/67003
https://github.com/ansible/ansible/pull/71180
172230d7b80c8565c4d9d6b6a8b301932b6785c0
edac065bd2ad3c613413c125cad3eee45e5f0835
2020-01-31T21:35:17Z
python
2020-08-10T20:31:26Z
lib/ansible/plugins/inventory/toml.py
# Copyright (c) 2018 Matt Martz <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = ''' inventory: toml version_added: "2.8" short_description: Uses a specific TOML file as an inventory source. description: - TOML based inventory format - File MUST have a valid '.toml' file extension notes: - Requires the 'toml' python library ''' EXAMPLES = ''' example1: | [all.vars] has_java = false [web] children = [ "apache", "nginx" ] vars = { http_port = 8080, myvar = 23 } [web.hosts] host1 = {} host2 = { ansible_port = 222 } [apache.hosts] tomcat1 = {} tomcat2 = { myvar = 34 } tomcat3 = { mysecret = "03#pa33w0rd" } [nginx.hosts] jenkins1 = {} [nginx.vars] has_java = true example2: | [all.vars] has_java = false [web] children = [ "apache", "nginx" ] [web.vars] http_port = 8080 myvar = 23 [web.hosts.host1] [web.hosts.host2] ansible_port = 222 [apache.hosts.tomcat1] [apache.hosts.tomcat2] myvar = 34 [apache.hosts.tomcat3] mysecret = "03#pa33w0rd" [nginx.hosts.jenkins1] [nginx.vars] has_java = true example3: | [ungrouped.hosts] host1 = {} host2 = { ansible_host = "127.0.0.1", ansible_port = 44 } host3 = { ansible_host = "127.0.0.1", ansible_port = 45 } [g1.hosts] host4 = {} [g2.hosts] host4 = {} ''' import os from functools import partial from ansible.errors import AnsibleFileNotFound, AnsibleParserError from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.common._collections_compat import MutableMapping, MutableSequence from ansible.module_utils.six import string_types, text_type from ansible.parsing.yaml.objects import AnsibleSequence, AnsibleUnicode from ansible.plugins.inventory import BaseFileInventoryPlugin from ansible.utils.display import Display try: import toml HAS_TOML = True except ImportError: HAS_TOML = False display = Display() if HAS_TOML and hasattr(toml, 'TomlEncoder'): class AnsibleTomlEncoder(toml.TomlEncoder): def __init__(self, *args, **kwargs): super(AnsibleTomlEncoder, self).__init__(*args, **kwargs) # Map our custom YAML object types to dump_funcs from ``toml`` self.dump_funcs.update({ AnsibleSequence: self.dump_funcs.get(list), AnsibleUnicode: self.dump_funcs.get(str), }) toml_dumps = partial(toml.dumps, encoder=AnsibleTomlEncoder()) else: def toml_dumps(data): return toml.dumps(convert_yaml_objects_to_native(data)) def convert_yaml_objects_to_native(obj): """Older versions of the ``toml`` python library, don't have a pluggable way to tell the encoder about custom types, so we need to ensure objects that we pass are native types. Only used on ``toml<0.10.0`` where ``toml.TomlEncoder`` is missing. This function recurses an object and ensures we cast any of the types from ``ansible.parsing.yaml.objects`` into their native types, effectively cleansing the data before we hand it over to ``toml`` This function doesn't directly check for the types from ``ansible.parsing.yaml.objects`` but instead checks for the types those objects inherit from, to offer more flexibility. 
""" if isinstance(obj, dict): return dict((k, convert_yaml_objects_to_native(v)) for k, v in obj.items()) elif isinstance(obj, list): return [convert_yaml_objects_to_native(v) for v in obj] elif isinstance(obj, text_type): return text_type(obj) else: return obj class InventoryModule(BaseFileInventoryPlugin): NAME = 'toml' def _parse_group(self, group, group_data): if not isinstance(group_data, (MutableMapping, type(None))): self.display.warning("Skipping '%s' as this is not a valid group definition" % group) return group = self.inventory.add_group(group) if group_data is None: return for key, data in group_data.items(): if key == 'vars': if not isinstance(data, MutableMapping): raise AnsibleParserError( 'Invalid "vars" entry for "%s" group, requires a dict, found "%s" instead.' % (group, type(data)) ) for var, value in data.items(): self.inventory.set_variable(group, var, value) elif key == 'children': if not isinstance(data, MutableSequence): raise AnsibleParserError( 'Invalid "children" entry for "%s" group, requires a list, found "%s" instead.' % (group, type(data)) ) for subgroup in data: self._parse_group(subgroup, {}) self.inventory.add_child(group, subgroup) elif key == 'hosts': if not isinstance(data, MutableMapping): raise AnsibleParserError( 'Invalid "hosts" entry for "%s" group, requires a dict, found "%s" instead.' % (group, type(data)) ) for host_pattern, value in data.items(): hosts, port = self._expand_hostpattern(host_pattern) self._populate_host_vars(hosts, value, group, port) else: self.display.warning( 'Skipping unexpected key "%s" in group "%s", only "vars", "children" and "hosts" are valid' % (key, group) ) def _load_file(self, file_name): if not file_name or not isinstance(file_name, string_types): raise AnsibleParserError("Invalid filename: '%s'" % to_native(file_name)) b_file_name = to_bytes(self.loader.path_dwim(file_name)) if not self.loader.path_exists(b_file_name): raise AnsibleFileNotFound("Unable to retrieve file contents", file_name=file_name) try: (b_data, private) = self.loader._get_file_contents(file_name) return toml.loads(to_text(b_data, errors='surrogate_or_strict')) except toml.TomlDecodeError as e: raise AnsibleParserError( 'TOML file (%s) is invalid: %s' % (file_name, to_native(e)), orig_exc=e ) except (IOError, OSError) as e: raise AnsibleParserError( "An error occurred while trying to read the file '%s': %s" % (file_name, to_native(e)), orig_exc=e ) except Exception as e: raise AnsibleParserError( "An unexpected error occurred while parsing the file '%s': %s" % (file_name, to_native(e)), orig_exc=e ) def parse(self, inventory, loader, path, cache=True): ''' parses the inventory file ''' if not HAS_TOML: raise AnsibleParserError( 'The TOML inventory plugin requires the python "toml" library' ) super(InventoryModule, self).parse(inventory, loader, path) self.set_options() try: data = self._load_file(path) except Exception as e: raise AnsibleParserError(e) if not data: raise AnsibleParserError('Parsed empty TOML file') elif data.get('plugin'): raise AnsibleParserError('Plugin configuration TOML file, not TOML inventory') for group_name in data: self._parse_group(group_name, data[group_name]) def verify_file(self, path): if super(InventoryModule, self).verify_file(path): file_name, ext = os.path.splitext(path) if ext == '.toml': return True return False
closed
ansible/ansible
https://github.com/ansible/ansible
70,831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY The example here is an AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there in 2.9. 
That's about the extent of what I know at this point.
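The recast the report describes is reproducible outside inventory code entirely: Jinja2's `NativeEnvironment` funnels rendered output through `ast.literal_eval`, which re-parses an all-digit string as an int and so undoes the explicit `| string` cast. A minimal sketch, assuming `jinja2>=2.10` (where `jinja2.nativetypes` was introduced):

```python
from ast import literal_eval

from jinja2.nativetypes import NativeEnvironment  # jinja2 >= 2.10

env = NativeEnvironment()
template = env.from_string('{{ account_id | string }}')

# The filter does produce a str, but the native concat/finalize step
# re-parses the rendered text with literal_eval, so an int comes back.
result = template.render(account_id=123456789)
print(type(result))  # <class 'int'>

# The recast in isolation: literal_eval happily parses digit strings.
print(type(literal_eval('123456789')))  # <class 'int'>
```

This is why no amount of filtering in the `keyed_groups` expression helps: the cast happens inside the template, but the round-trip back to a native type happens after rendering.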
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
changelogs/fragments/70831-skip-literal_eval-string-filter-native-jinja.yml
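The changelog fragment named on the line above hints at the shape of the fix: under native Jinja, output that came from an explicit `string` filter should skip the `literal_eval` round-trip. A conceptual sketch of that idea follows; the names here (`TaggedText`, `string_filter`, `native_finalize`) are hypothetical stand-ins, not Ansible's actual API, so see the linked PR for the real implementation:

```python
from ast import literal_eval


class TaggedText(str):
    """Hypothetical marker type: the user explicitly cast to a string."""


def string_filter(value):
    # Stand-in for a native-aware override of the ``string`` filter.
    return TaggedText(value)


def native_finalize(rendered):
    # Stand-in for the native concat/finalize step.
    if isinstance(rendered, TaggedText):
        return str(rendered)           # honor the explicit cast, skip eval
    try:
        return literal_eval(rendered)  # usual native-type re-parsing
    except (ValueError, SyntaxError):
        return rendered


print(type(native_finalize(string_filter('123456789'))))  # <class 'str'>
print(type(native_finalize('123456789')))                 # <class 'int'>
```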
closed
ansible/ansible
https://github.com/ansible/ansible
70,831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY The example here is an AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there in 2.9. 
That's about the extent of what I know at this point.
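For completeness, the gate that actually raises in the traceback above accepts only string, list, or dict group keys, so once native Jinja hands back an int there is no way past it. A small stand-alone sketch approximating that type check (this mirrors the behavior of `_add_host_to_keyed_groups`, not its real code):

```python
def check_group_key(key):
    # Approximation of the type gate in _add_host_to_keyed_groups: normalize
    # acceptable key types to a list of group names, reject everything else.
    if isinstance(key, str):
        return [key]
    if isinstance(key, (list, dict)):
        return list(key)
    raise TypeError(
        "Invalid group name format, expected a string or a list of them "
        "or dictionary, got: %s" % type(key)
    )


print(check_group_key('123456789'))    # ['123456789'] - accepted
try:
    check_group_key(123456789)         # the int produced by native Jinja
except TypeError as exc:
    print(exc)                         # mirrors the AnsibleParserError
```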
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
lib/ansible/config/base.yml
# Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) --- ALLOW_WORLD_READABLE_TMPFILES: name: Allow world-readable temporary files deprecated: why: moved to a per plugin approach that is more flexible. version: "2.14" alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant. default: False description: - This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task. - It is useful when becoming an unprivileged user. env: [] ini: - {key: allow_world_readable_tmpfiles, section: defaults} type: boolean yaml: {key: defaults.allow_world_readable_tmpfiles} version_added: "2.1" ANSIBLE_CONNECTION_PATH: name: Path of ansible-connection script default: null description: - Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. - If null, ansible will start with the same directory as the ansible script. type: path env: [{name: ANSIBLE_CONNECTION_PATH}] ini: - {key: ansible_connection_path, section: persistent_connection} yaml: {key: persistent_connection.ansible_connection_path} version_added: "2.8" ANSIBLE_COW_SELECTION: name: Cowsay filter selection default: default description: This allows you to chose a specific cowsay stencil for the banners or use 'random' to cycle through them. env: [{name: ANSIBLE_COW_SELECTION}] ini: - {key: cow_selection, section: defaults} ANSIBLE_COW_WHITELIST: name: Cowsay filter whitelist default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www'] description: White list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates. env: [{name: ANSIBLE_COW_WHITELIST}] ini: - {key: cow_whitelist, section: defaults} type: list yaml: {key: display.cowsay_whitelist} ANSIBLE_FORCE_COLOR: name: Force color output default: False description: This options forces color mode even when running without a TTY or the "nocolor" setting is True. env: [{name: ANSIBLE_FORCE_COLOR}] ini: - {key: force_color, section: defaults} type: boolean yaml: {key: display.force_color} ANSIBLE_NOCOLOR: name: Suppress color output default: False description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information. env: [{name: ANSIBLE_NOCOLOR}] ini: - {key: nocolor, section: defaults} type: boolean yaml: {key: display.nocolor} ANSIBLE_NOCOWS: name: Suppress cowsay output default: False description: If you have cowsay installed but want to avoid the 'cows' (why????), use this. 
env: [{name: ANSIBLE_NOCOWS}] ini: - {key: nocows, section: defaults} type: boolean yaml: {key: display.i_am_no_fun} ANSIBLE_COW_PATH: name: Set path to cowsay command default: null description: Specify a custom cowsay path or swap in your cowsay implementation of choice env: [{name: ANSIBLE_COW_PATH}] ini: - {key: cowpath, section: defaults} type: string yaml: {key: display.cowpath} ANSIBLE_PIPELINING: name: Connection pipelining default: False description: - Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. - This can result in a very significant performance improvement when enabled. - "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default." - This options is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled. env: - name: ANSIBLE_PIPELINING - name: ANSIBLE_SSH_PIPELINING ini: - section: connection key: pipelining - section: ssh_connection key: pipelining type: boolean yaml: {key: plugins.connection.pipelining} ANSIBLE_SSH_ARGS: # TODO: move to ssh plugin default: -C -o ControlMaster=auto -o ControlPersist=60s description: - If set, this will override the Ansible default ssh arguments. - In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate. - Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used. env: [{name: ANSIBLE_SSH_ARGS}] ini: - {key: ssh_args, section: ssh_connection} yaml: {key: ssh_connection.ssh_args} ANSIBLE_SSH_CONTROL_PATH: # TODO: move to ssh plugin default: null description: - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. - Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. - Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`. - Be aware that this setting is ignored if `-o ControlPath` is set in ssh args. env: [{name: ANSIBLE_SSH_CONTROL_PATH}] ini: - {key: control_path, section: ssh_connection} yaml: {key: ssh_connection.control_path} ANSIBLE_SSH_CONTROL_PATH_DIR: # TODO: move to ssh plugin default: ~/.ansible/cp description: - This sets the directory to use for ssh control path if the control path setting is null. - Also, provides the `%(directory)s` variable for the control path setting. env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: ssh_connection} yaml: {key: ssh_connection.control_path_dir} ANSIBLE_SSH_EXECUTABLE: # TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed default: ssh description: - This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH. - This option is usually not required, it might be useful when access to system ssh is restricted, or when using ssh wrappers to connect to remote hosts. 
env: [{name: ANSIBLE_SSH_EXECUTABLE}] ini: - {key: ssh_executable, section: ssh_connection} yaml: {key: ssh_connection.ssh_executable} version_added: "2.2" ANSIBLE_SSH_RETRIES: # TODO: move to ssh plugin default: 0 description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE' env: [{name: ANSIBLE_SSH_RETRIES}] ini: - {key: retries, section: ssh_connection} type: integer yaml: {key: ssh_connection.retries} ANY_ERRORS_FATAL: name: Make Task failures fatal default: False description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors. env: - name: ANSIBLE_ANY_ERRORS_FATAL ini: - section: defaults key: any_errors_fatal type: boolean yaml: {key: errors.any_task_errors_fatal} version_added: "2.4" BECOME_ALLOW_SAME_USER: name: Allow becoming the same user default: False description: This setting controls if become is skipped when remote user and become user are the same. I.E root sudo to root. env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}] ini: - {key: become_allow_same_user, section: privilege_escalation} type: boolean yaml: {key: privilege_escalation.become_allow_same_user} AGNOSTIC_BECOME_PROMPT: name: Display an agnostic become prompt default: True type: boolean description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}] ini: - {key: agnostic_become_prompt, section: privilege_escalation} yaml: {key: privilege_escalation.agnostic_become_prompt} version_added: "2.5" CACHE_PLUGIN: name: Persistent Cache plugin default: memory description: Chooses which cache plugin to use, the default 'memory' is ephemeral. env: [{name: ANSIBLE_CACHE_PLUGIN}] ini: - {key: fact_caching, section: defaults} yaml: {key: facts.cache.plugin} CACHE_PLUGIN_CONNECTION: name: Cache Plugin URI default: ~ description: Defines connection or path information for the cache plugin env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}] ini: - {key: fact_caching_connection, section: defaults} yaml: {key: facts.cache.uri} CACHE_PLUGIN_PREFIX: name: Cache Plugin table prefix default: ansible_facts description: Prefix to use for cache plugin files/tables env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}] ini: - {key: fact_caching_prefix, section: defaults} yaml: {key: facts.cache.prefix} CACHE_PLUGIN_TIMEOUT: name: Cache Plugin expiration timeout default: 86400 description: Expiration timeout for the cache plugin data env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}] ini: - {key: fact_caching_timeout, section: defaults} type: integer yaml: {key: facts.cache.timeout} COLLECTIONS_SCAN_SYS_PATH: name: enable/disable scanning sys.path for installed collections default: true type: boolean env: - {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH} ini: - {key: collections_scan_sys_path, section: defaults} COLLECTIONS_PATHS: name: ordered list of root paths for loading installed Ansible collections content description: Colon separated paths in which Ansible will search for collections content. default: ~/.ansible/collections:/usr/share/ansible/collections type: pathspec env: - name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases. 
- name: ANSIBLE_COLLECTIONS_PATH version_added: '2.10' ini: - key: collections_paths section: defaults - key: collections_path section: defaults version_added: '2.10' COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH: name: Defines behavior when loading a collection that does not support the current Ansible version description: - When a collection is loaded that does not support the running Ansible version (via the collection metadata key `requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore` skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution. env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}] ini: [{key: collections_on_ansible_version_mismatch, section: defaults}] choices: [error, warning, ignore] default: warning COLOR_CHANGED: name: Color for 'changed' task status default: yellow description: Defines the color to use on 'Changed' task status env: [{name: ANSIBLE_COLOR_CHANGED}] ini: - {key: changed, section: colors} yaml: {key: display.colors.changed} COLOR_CONSOLE_PROMPT: name: "Color for ansible-console's prompt task status" default: white description: Defines the default color to use for ansible-console env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}] ini: - {key: console_prompt, section: colors} version_added: "2.7" COLOR_DEBUG: name: Color for debug statements default: dark gray description: Defines the color to use when emitting debug messages env: [{name: ANSIBLE_COLOR_DEBUG}] ini: - {key: debug, section: colors} yaml: {key: display.colors.debug} COLOR_DEPRECATE: name: Color for deprecation messages default: purple description: Defines the color to use when emitting deprecation messages env: [{name: ANSIBLE_COLOR_DEPRECATE}] ini: - {key: deprecate, section: colors} yaml: {key: display.colors.deprecate} COLOR_DIFF_ADD: name: Color for diff added display default: green description: Defines the color to use when showing added lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_ADD}] ini: - {key: diff_add, section: colors} yaml: {key: display.colors.diff.add} COLOR_DIFF_LINES: name: Color for diff lines display default: cyan description: Defines the color to use when showing diffs env: [{name: ANSIBLE_COLOR_DIFF_LINES}] ini: - {key: diff_lines, section: colors} COLOR_DIFF_REMOVE: name: Color for diff removed display default: red description: Defines the color to use when showing removed lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}] ini: - {key: diff_remove, section: colors} COLOR_ERROR: name: Color for error messages default: red description: Defines the color to use when emitting error messages env: [{name: ANSIBLE_COLOR_ERROR}] ini: - {key: error, section: colors} yaml: {key: colors.error} COLOR_HIGHLIGHT: name: Color for highlighting default: white description: Defines the color to use for highlighting env: [{name: ANSIBLE_COLOR_HIGHLIGHT}] ini: - {key: highlight, section: colors} COLOR_OK: name: Color for 'ok' task status default: green description: Defines the color to use when showing 'OK' task status env: [{name: ANSIBLE_COLOR_OK}] ini: - {key: ok, section: colors} COLOR_SKIP: name: Color for 'skip' task status default: cyan description: Defines the color to use when showing 'Skipped' task status env: [{name: ANSIBLE_COLOR_SKIP}] ini: - {key: skip, section: colors} COLOR_UNREACHABLE: name: Color for 'unreachable' host state default: bright red description: Defines the color to use on 'Unreachable' status env: [{name: ANSIBLE_COLOR_UNREACHABLE}] ini: - {key: unreachable, 
section: colors} COLOR_VERBOSE: name: Color for verbose messages default: blue description: Defines the color to use when emitting verbose messages. i.e those that show with '-v's. env: [{name: ANSIBLE_COLOR_VERBOSE}] ini: - {key: verbose, section: colors} COLOR_WARN: name: Color for warning messages default: bright purple description: Defines the color to use when emitting warning messages env: [{name: ANSIBLE_COLOR_WARN}] ini: - {key: warn, section: colors} CONDITIONAL_BARE_VARS: name: Allow bare variable evaluation in conditionals default: False type: boolean description: - With this setting on (True), running conditional evaluation 'var' is treated differently than 'var.subkey' as the first is evaluated directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans. - With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore. - Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future. - Expect that this setting eventually will be deprecated after 2.12 env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}] ini: - {key: conditional_bare_variables, section: defaults} version_added: "2.8" COVERAGE_REMOTE_OUTPUT: name: Sets the output directory and filename prefix to generate coverage run info. description: - Sets the output directory on the remote host to generate coverage reports to. - Currently only used for remote coverage on PowerShell modules. - This is for internal use only. env: - {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT} vars: - {name: _ansible_coverage_remote_output} type: str version_added: '2.9' COVERAGE_REMOTE_PATH_FILTER: name: Sets the list of paths to run coverage for. description: - A list of paths for files on the Ansible controller to run coverage for when executing on the remote host. - Only files that match the path glob will have its coverage collected. - Multiple path globs can be specified and are separated by ``:``. - Currently only used for remote coverage on PowerShell modules. - This is for internal use only. default: '*' env: - {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER} type: str version_added: '2.9' ACTION_WARNINGS: name: Toggle action warnings default: True description: - By default Ansible will issue a warning when received from a task action (module or action plugin) - These warnings can be silenced by adjusting this setting to False. env: [{name: ANSIBLE_ACTION_WARNINGS}] ini: - {key: action_warnings, section: defaults} type: boolean version_added: "2.5" COMMAND_WARNINGS: name: Command module warnings default: False description: - Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module. - These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``. - As of version 2.11, this is disabled by default. env: [{name: ANSIBLE_COMMAND_WARNINGS}] ini: - {key: command_warnings, section: defaults} type: boolean version_added: "1.8" deprecated: why: The command warnings feature is being removed. version: "2.14" LOCALHOST_WARNING: name: Warning when using implicit inventory with only localhost default: True description: - By default Ansible will issue a warning when there are no hosts in the inventory. - These warnings can be silenced by adjusting this setting to False. 
env: [{name: ANSIBLE_LOCALHOST_WARNING}] ini: - {key: localhost_warning, section: defaults} type: boolean version_added: "2.6" DOC_FRAGMENT_PLUGIN_PATH: name: documentation fragment plugins path default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins. env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}] ini: - {key: doc_fragment_plugins, section: defaults} type: pathspec DEFAULT_ACTION_PLUGIN_PATH: name: Action plugins path default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action description: Colon separated paths in which Ansible will search for Action Plugins. env: [{name: ANSIBLE_ACTION_PLUGINS}] ini: - {key: action_plugins, section: defaults} type: pathspec yaml: {key: plugins.action.path} DEFAULT_ALLOW_UNSAFE_LOOKUPS: name: Allow unsafe lookups default: False description: - "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo) to return data that is not marked 'unsafe'." - By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language, as this could represent a security risk. This option is provided to allow for backwards-compatibility, however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run through the templating engine late env: [] ini: - {key: allow_unsafe_lookups, section: defaults} type: boolean version_added: "2.2.3" DEFAULT_ASK_PASS: name: Ask for the login password default: False description: - This controls whether an Ansible playbook should prompt for a login password. If using SSH keys for authentication, you probably do not needed to change this setting. env: [{name: ANSIBLE_ASK_PASS}] ini: - {key: ask_pass, section: defaults} type: boolean yaml: {key: defaults.ask_pass} DEFAULT_ASK_VAULT_PASS: name: Ask for the vault password(s) default: False description: - This controls whether an Ansible playbook should prompt for a vault password. env: [{name: ANSIBLE_ASK_VAULT_PASS}] ini: - {key: ask_vault_pass, section: defaults} type: boolean DEFAULT_BECOME: name: Enable privilege escalation (become) default: False description: Toggles the use of privilege escalation, allowing you to 'become' another user after login. env: [{name: ANSIBLE_BECOME}] ini: - {key: become, section: privilege_escalation} type: boolean DEFAULT_BECOME_ASK_PASS: name: Ask for the privilege escalation (become) password default: False description: Toggle to prompt for privilege escalation password. env: [{name: ANSIBLE_BECOME_ASK_PASS}] ini: - {key: become_ask_pass, section: privilege_escalation} type: boolean DEFAULT_BECOME_METHOD: name: Choose privilege escalation method default: 'sudo' description: Privilege escalation method to use when `become` is enabled. env: [{name: ANSIBLE_BECOME_METHOD}] ini: - {section: privilege_escalation, key: become_method} DEFAULT_BECOME_EXE: name: Choose 'become' executable default: ~ description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH' env: [{name: ANSIBLE_BECOME_EXE}] ini: - {key: become_exe, section: privilege_escalation} DEFAULT_BECOME_FLAGS: name: Set 'become' executable options default: '' description: Flags to pass to the privilege escalation executable. 
env: [{name: ANSIBLE_BECOME_FLAGS}] ini: - {key: become_flags, section: privilege_escalation} BECOME_PLUGIN_PATH: name: Become plugins path default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become description: Colon separated paths in which Ansible will search for Become Plugins. env: [{name: ANSIBLE_BECOME_PLUGINS}] ini: - {key: become_plugins, section: defaults} type: pathspec version_added: "2.8" DEFAULT_BECOME_USER: # FIXME: should really be blank and make -u passing optional depending on it name: Set the user you 'become' via privilege escalation default: root description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified. env: [{name: ANSIBLE_BECOME_USER}] ini: - {key: become_user, section: privilege_escalation} yaml: {key: become.user} DEFAULT_CACHE_PLUGIN_PATH: name: Cache Plugins Path default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache description: Colon separated paths in which Ansible will search for Cache Plugins. env: [{name: ANSIBLE_CACHE_PLUGINS}] ini: - {key: cache_plugins, section: defaults} type: pathspec DEFAULT_CALLABLE_WHITELIST: name: Template 'callable' whitelist default: [] description: Whitelist of callable methods to be made available to template evaluation env: [{name: ANSIBLE_CALLABLE_WHITELIST}] ini: - {key: callable_whitelist, section: defaults} type: list DEFAULT_CALLBACK_PLUGIN_PATH: name: Callback Plugins Path default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback description: Colon separated paths in which Ansible will search for Callback Plugins. env: [{name: ANSIBLE_CALLBACK_PLUGINS}] ini: - {key: callback_plugins, section: defaults} type: pathspec yaml: {key: plugins.callback.path} DEFAULT_CALLBACK_WHITELIST: name: Callback Whitelist default: [] description: - "List of whitelisted callbacks, not all callbacks need whitelisting, but many of those shipped with Ansible do as we don't want them activated by default." env: [{name: ANSIBLE_CALLBACK_WHITELIST}] ini: - {key: callback_whitelist, section: defaults} type: list yaml: {key: plugins.callback.whitelist} DEFAULT_CLICONF_PLUGIN_PATH: name: Cliconf Plugins Path default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf description: Colon separated paths in which Ansible will search for Cliconf Plugins. env: [{name: ANSIBLE_CLICONF_PLUGINS}] ini: - {key: cliconf_plugins, section: defaults} type: pathspec DEFAULT_CONNECTION_PLUGIN_PATH: name: Connection Plugins Path default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection description: Colon separated paths in which Ansible will search for Connection Plugins. env: [{name: ANSIBLE_CONNECTION_PLUGINS}] ini: - {key: connection_plugins, section: defaults} type: pathspec yaml: {key: plugins.connection.path} DEFAULT_DEBUG: name: Debug mode default: False description: - "Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. Debug output can also include secret information despite no_log settings being enabled, which means debug mode should not be used in production." env: [{name: ANSIBLE_DEBUG}] ini: - {key: debug, section: defaults} type: boolean DEFAULT_EXECUTABLE: name: Target shell executable default: /bin/sh description: - "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target. Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is." 
env: [{name: ANSIBLE_EXECUTABLE}] ini: - {key: executable, section: defaults} DEFAULT_FACT_PATH: name: local fact path default: ~ description: - "This option allows you to globally configure a custom path for 'local_facts' for the implied M(setup) task when using fact gathering." - "If not set, it will fallback to the default from the M(setup) module: ``/etc/ansible/facts.d``." - "This does **not** affect user defined tasks that use the M(setup) module." env: [{name: ANSIBLE_FACT_PATH}] ini: - {key: fact_path, section: defaults} type: string yaml: {key: facts.gathering.fact_path} DEFAULT_FILTER_PLUGIN_PATH: name: Jinja2 Filter Plugins Path default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins. env: [{name: ANSIBLE_FILTER_PLUGINS}] ini: - {key: filter_plugins, section: defaults} type: pathspec DEFAULT_FORCE_HANDLERS: name: Force handlers to run after failure default: False description: - This option controls if notified handlers run on a host even if a failure occurs on that host. - When false, the handlers will not run if a failure has occurred on a host. - This can also be set per play or on the command line. See Handlers and Failure for more details. env: [{name: ANSIBLE_FORCE_HANDLERS}] ini: - {key: force_handlers, section: defaults} type: boolean version_added: "1.9.1" DEFAULT_FORKS: name: Number of task forks default: 5 description: Maximum number of forks Ansible will use to execute tasks on target hosts. env: [{name: ANSIBLE_FORKS}] ini: - {key: forks, section: defaults} type: integer DEFAULT_GATHERING: name: Gathering behaviour default: 'implicit' description: - This setting controls the default policy of fact gathering (facts discovered about remote systems). - "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set." - "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play." - "The 'smart' value means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run." - "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin." env: [{name: ANSIBLE_GATHERING}] ini: - key: gathering section: defaults version_added: "1.6" choices: ['smart', 'explicit', 'implicit'] DEFAULT_GATHER_SUBSET: name: Gather facts subset default: ['all'] description: - Set the `gather_subset` option for the M(setup) task in the implicit fact gathering. See the module documentation for specifics. - "It does **not** apply to user defined M(setup) tasks." env: [{name: ANSIBLE_GATHER_SUBSET}] ini: - key: gather_subset section: defaults version_added: "2.1" type: list DEFAULT_GATHER_TIMEOUT: name: Gather facts timeout default: 10 description: - Set the timeout in seconds for the implicit fact gathering. - "It does **not** apply to user defined M(setup) tasks." env: [{name: ANSIBLE_GATHER_TIMEOUT}] ini: - {key: gather_timeout, section: defaults} type: integer yaml: {key: defaults.gather_timeout} DEFAULT_HANDLER_INCLUDES_STATIC: name: Make handler M(include) static default: False description: - "Since 2.0 M(include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'." 
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}] ini: - {key: handler_includes_static, section: defaults} type: boolean deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: none as its already built into the decision between include_tasks and import_tasks DEFAULT_HASH_BEHAVIOUR: name: Hash merge behaviour default: replace type: string choices: ["replace", "merge"] description: - This setting controls how variables merge in Ansible. By default Ansible will override variables in specific precedence orders, as described in Variables. When a variable of higher precedence wins, it will replace the other value. - "Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged. This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it, and playbooks in the official examples repos do not use this setting" - In version 2.0 a ``combine`` filter was added to allow doing this for a particular variable (described in Filters). env: [{name: ANSIBLE_HASH_BEHAVIOUR}] ini: - {key: hash_behaviour, section: defaults} deprecated: why: This feature is fragile and not portable, leading to continual confusion and misuse version: "2.13" alternatives: the ``combine`` filter explicitly DEFAULT_HOST_LIST: name: Inventory Source default: /etc/ansible/hosts description: Comma separated list of Ansible inventory sources env: - name: ANSIBLE_INVENTORY expand_relative_paths: True ini: - key: inventory section: defaults type: pathlist yaml: {key: defaults.inventory} DEFAULT_HTTPAPI_PLUGIN_PATH: name: HttpApi Plugins Path default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi description: Colon separated paths in which Ansible will search for HttpApi Plugins. env: [{name: ANSIBLE_HTTPAPI_PLUGINS}] ini: - {key: httpapi_plugins, section: defaults} type: pathspec DEFAULT_INTERNAL_POLL_INTERVAL: name: Internal poll interval default: 0.001 env: [] ini: - {key: internal_poll_interval, section: defaults} type: float version_added: "2.2" description: - This sets the interval (in seconds) of Ansible internal processes polling each other. Lower values improve performance with large playbooks at the expense of extra CPU load. Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern. - "The default corresponds to the value hardcoded in Ansible <= 2.1" DEFAULT_INVENTORY_PLUGIN_PATH: name: Inventory Plugins Path default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory description: Colon separated paths in which Ansible will search for Inventory Plugins. env: [{name: ANSIBLE_INVENTORY_PLUGINS}] ini: - {key: inventory_plugins, section: defaults} type: pathspec DEFAULT_JINJA2_EXTENSIONS: name: Enabled Jinja2 extensions default: [] description: - This is a developer-specific feature that allows enabling additional Jinja2 extensions. - "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)" env: [{name: ANSIBLE_JINJA2_EXTENSIONS}] ini: - {key: jinja2_extensions, section: defaults} DEFAULT_JINJA2_NATIVE: name: Use Jinja2's NativeEnvironment for templating default: False description: This option preserves variable types during template operations. 
This requires Jinja2 >= 2.10. env: [{name: ANSIBLE_JINJA2_NATIVE}] ini: - {key: jinja2_native, section: defaults} type: boolean yaml: {key: jinja2_native} version_added: 2.7 DEFAULT_KEEP_REMOTE_FILES: name: Keep remote files default: False description: - Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote. - If this option is enabled it will disable ``ANSIBLE_PIPELINING``. env: [{name: ANSIBLE_KEEP_REMOTE_FILES}] ini: - {key: keep_remote_files, section: defaults} type: boolean DEFAULT_LIBVIRT_LXC_NOSECLABEL: # TODO: move to plugin name: No security label on Lxc default: False description: - "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux." env: - name: LIBVIRT_LXC_NOSECLABEL deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_LIBVIRT_LXC_NOSECLABEL" environment variable - name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL ini: - {key: libvirt_lxc_noseclabel, section: selinux} type: boolean version_added: "2.1" DEFAULT_LOAD_CALLBACK_PLUGINS: name: Load callbacks for adhoc default: False description: - Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for ``ansible-playbook``. env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}] ini: - {key: bin_ansible_callbacks, section: defaults} type: boolean version_added: "1.8" DEFAULT_LOCAL_TMP: name: Controller temporary directory default: ~/.ansible/tmp description: Temporary directory for Ansible to use on the controller. env: [{name: ANSIBLE_LOCAL_TEMP}] ini: - {key: local_tmp, section: defaults} type: tmppath DEFAULT_LOG_PATH: name: Ansible log file path default: ~ description: File to which Ansible will log on the controller. When empty logging is disabled. env: [{name: ANSIBLE_LOG_PATH}] ini: - {key: log_path, section: defaults} type: path DEFAULT_LOG_FILTER: name: Name filters for python logger default: [] description: List of logger names to filter out of the log file env: [{name: ANSIBLE_LOG_FILTER}] ini: - {key: log_filter, section: defaults} type: list DEFAULT_LOOKUP_PLUGIN_PATH: name: Lookup Plugins Path description: Colon separated paths in which Ansible will search for Lookup Plugins. default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup env: [{name: ANSIBLE_LOOKUP_PLUGINS}] ini: - {key: lookup_plugins, section: defaults} type: pathspec yaml: {key: defaults.lookup_plugins} DEFAULT_MANAGED_STR: name: Ansible managed default: 'Ansible managed' description: Sets the macro for the 'ansible_managed' variable available for M(template) and M(win_template) modules. This is only relevant for those two modules. env: [] ini: - {key: ansible_managed, section: defaults} yaml: {key: defaults.ansible_managed} DEFAULT_MODULE_ARGS: name: Adhoc default arguments default: '' description: - This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified. env: [{name: ANSIBLE_MODULE_ARGS}] ini: - {key: module_args, section: defaults} DEFAULT_MODULE_COMPRESSION: name: Python module compression default: ZIP_DEFLATED description: Compression scheme to use when transferring Python modules to the target. 
env: [] ini: - {key: module_compression, section: defaults} # vars: # - name: ansible_module_compression DEFAULT_MODULE_NAME: name: Default adhoc module default: command description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``." env: [] ini: - {key: module_name, section: defaults} DEFAULT_MODULE_PATH: name: Modules Path description: Colon separated paths in which Ansible will search for Modules. default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules env: [{name: ANSIBLE_LIBRARY}] ini: - {key: library, section: defaults} type: pathspec DEFAULT_MODULE_UTILS_PATH: name: Module Utils Path description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules. default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils env: [{name: ANSIBLE_MODULE_UTILS}] ini: - {key: module_utils, section: defaults} type: pathspec DEFAULT_NETCONF_PLUGIN_PATH: name: Netconf Plugins Path default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf description: Colon separated paths in which Ansible will search for Netconf Plugins. env: [{name: ANSIBLE_NETCONF_PLUGINS}] ini: - {key: netconf_plugins, section: defaults} type: pathspec DEFAULT_NO_LOG: name: No log default: False description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures." env: [{name: ANSIBLE_NO_LOG}] ini: - {key: no_log, section: defaults} type: boolean DEFAULT_NO_TARGET_SYSLOG: name: No syslog on target default: False description: - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable a newer style PowerShell modules from writting to the event log. env: [{name: ANSIBLE_NO_TARGET_SYSLOG}] ini: - {key: no_target_syslog, section: defaults} vars: - name: ansible_no_target_syslog version_added: '2.10' type: boolean yaml: {key: defaults.no_target_syslog} DEFAULT_NULL_REPRESENTATION: name: Represent a null default: ~ description: What templating should return as a 'null' value. When not set it will let Jinja2 decide. env: [{name: ANSIBLE_NULL_REPRESENTATION}] ini: - {key: null_representation, section: defaults} type: none DEFAULT_POLL_INTERVAL: name: Async poll interval default: 15 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed. env: [{name: ANSIBLE_POLL_INTERVAL}] ini: - {key: poll_interval, section: defaults} type: integer DEFAULT_PRIVATE_KEY_FILE: name: Private key file default: ~ description: - Option for connections using a certificate or key file to authenticate, rather than an agent or passwords, you can set the default value here to avoid re-specifying --private-key with every invocation. env: [{name: ANSIBLE_PRIVATE_KEY_FILE}] ini: - {key: private_key_file, section: defaults} type: path DEFAULT_PRIVATE_ROLE_VARS: name: Private role variables default: False description: - Makes role variables inaccessible from other roles. - This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook. 
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}] ini: - {key: private_role_vars, section: defaults} type: boolean yaml: {key: defaults.private_role_vars} DEFAULT_REMOTE_PORT: name: Remote port default: ~ description: Port to use in remote connections, when blank it will use the connection plugin default. env: [{name: ANSIBLE_REMOTE_PORT}] ini: - {key: remote_port, section: defaults} type: integer yaml: {key: defaults.remote_port} DEFAULT_REMOTE_USER: name: Login/Remote User default: description: - Sets the login user for the target machines - "When blank it uses the connection plugin's default, normally the user currently executing Ansible." env: [{name: ANSIBLE_REMOTE_USER}] ini: - {key: remote_user, section: defaults} DEFAULT_ROLES_PATH: name: Roles path default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles description: Colon separated paths in which Ansible will search for Roles. env: [{name: ANSIBLE_ROLES_PATH}] expand_relative_paths: True ini: - {key: roles_path, section: defaults} type: pathspec yaml: {key: defaults.roles_path} DEFAULT_SCP_IF_SSH: # TODO: move to ssh plugin default: smart description: - "Preferred method to use when transferring files over ssh." - When set to smart, Ansible will try them until one succeeds or they all fail. - If set to True, it will force 'scp', if False it will use 'sftp'. env: [{name: ANSIBLE_SCP_IF_SSH}] ini: - {key: scp_if_ssh, section: ssh_connection} DEFAULT_SELINUX_SPECIAL_FS: name: Problematic file systems default: fuse, nfs, vboxsf, ramfs, 9p, vfat description: - "Some filesystems do not support safe operations and/or return inconsistent errors, this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors." - Data corruption may occur and writes are not always verified when a filesystem is in the list. env: - name: ANSIBLE_SELINUX_SPECIAL_FS version_added: "2.9" ini: - {key: special_context_filesystems, section: selinux} type: list DEFAULT_SFTP_BATCH_MODE: # TODO: move to ssh plugin default: True description: 'TODO: write it' env: [{name: ANSIBLE_SFTP_BATCH_MODE}] ini: - {key: sftp_batch_mode, section: ssh_connection} type: boolean yaml: {key: ssh_connection.sftp_batch_mode} DEFAULT_SSH_TRANSFER_METHOD: # TODO: move to ssh plugin default: description: 'unused?' # - "Preferred method to use when transferring files over ssh" # - Setting to smart will try them until one succeeds or they all fail #choices: ['sftp', 'scp', 'dd', 'smart'] env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}] ini: - {key: transfer_method, section: ssh_connection} DEFAULT_STDOUT_CALLBACK: name: Main display callback plugin default: default description: - "Set the main callback used to display Ansible output, you can only have one at a time." - You can have many other callbacks, but just one can be in charge of stdout. env: [{name: ANSIBLE_STDOUT_CALLBACK}] ini: - {key: stdout_callback, section: defaults} ENABLE_TASK_DEBUGGER: name: Whether to enable the task debugger default: False description: - Whether or not to enable the task debugger, this previously was done as a strategy plugin. - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when - a task is failed on unreachable. Use the debugger keyword for more flexibility. 
type: boolean env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}] ini: - {key: enable_task_debugger, section: defaults} version_added: "2.5" TASK_DEBUGGER_IGNORE_ERRORS: name: Whether a failed task with ignore_errors=True will still invoke the debugger default: True description: - This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True is specified. - True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors. type: boolean env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}] ini: - {key: task_debugger_ignore_errors, section: defaults} version_added: "2.7" DEFAULT_STRATEGY: name: Implied strategy default: 'linear' description: Set the default strategy used for plays. env: [{name: ANSIBLE_STRATEGY}] ini: - {key: strategy, section: defaults} version_added: "2.3" DEFAULT_STRATEGY_PLUGIN_PATH: name: Strategy Plugins Path description: Colon separated paths in which Ansible will search for Strategy Plugins. default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy env: [{name: ANSIBLE_STRATEGY_PLUGINS}] ini: - {key: strategy_plugins, section: defaults} type: pathspec DEFAULT_SU: default: False description: 'Toggle the use of "su" for tasks.' env: [{name: ANSIBLE_SU}] ini: - {key: su, section: defaults} type: boolean yaml: {key: defaults.su} DEFAULT_SYSLOG_FACILITY: name: syslog facility default: LOG_USER description: Syslog facility to use when Ansible logs to the remote target env: [{name: ANSIBLE_SYSLOG_FACILITY}] ini: - {key: syslog_facility, section: defaults} DEFAULT_TASK_INCLUDES_STATIC: name: Task include static default: False description: - The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails and it is not explicitly set in task. env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}] ini: - {key: task_includes_static, section: defaults} type: boolean version_added: "2.1" deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: None, as its already built into the decision between include_tasks and import_tasks DEFAULT_TERMINAL_PLUGIN_PATH: name: Terminal Plugins Path default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal description: Colon separated paths in which Ansible will search for Terminal Plugins. env: [{name: ANSIBLE_TERMINAL_PLUGINS}] ini: - {key: terminal_plugins, section: defaults} type: pathspec DEFAULT_TEST_PLUGIN_PATH: name: Jinja2 Test Plugins Path description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins. default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test env: [{name: ANSIBLE_TEST_PLUGINS}] ini: - {key: test_plugins, section: defaults} type: pathspec DEFAULT_TIMEOUT: name: Connection timeout default: 10 description: This is the default timeout for connection plugins to use. 
env: [{name: ANSIBLE_TIMEOUT}] ini: - {key: timeout, section: defaults} type: integer DEFAULT_TRANSPORT: # note that ssh_utils refs this and needs to be updated if removed name: Connection plugin default: smart description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions" env: [{name: ANSIBLE_TRANSPORT}] ini: - {key: transport, section: defaults} DEFAULT_UNDEFINED_VAR_BEHAVIOR: name: Jinja2 fail on undefined default: True version_added: "1.3" description: - When True, this causes ansible templating to fail steps that reference variable names that are likely typoed. - "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written." env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}] ini: - {key: error_on_undefined_vars, section: defaults} type: boolean DEFAULT_VARS_PLUGIN_PATH: name: Vars Plugins Path default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars description: Colon separated paths in which Ansible will search for Vars Plugins. env: [{name: ANSIBLE_VARS_PLUGINS}] ini: - {key: vars_plugins, section: defaults} type: pathspec # TODO: unused? #DEFAULT_VAR_COMPRESSION_LEVEL: # default: 0 # description: 'TODO: write it' # env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}] # ini: # - {key: var_compression_level, section: defaults} # type: integer # yaml: {key: defaults.var_compression_level} DEFAULT_VAULT_ID_MATCH: name: Force vault id match default: False description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id' env: [{name: ANSIBLE_VAULT_ID_MATCH}] ini: - {key: vault_id_match, section: defaults} yaml: {key: defaults.vault_id_match} DEFAULT_VAULT_IDENTITY: name: Vault id label default: default description: 'The label to use for the default vault id label in cases where a vault id label is not provided' env: [{name: ANSIBLE_VAULT_IDENTITY}] ini: - {key: vault_identity, section: defaults} yaml: {key: defaults.vault_identity} DEFAULT_VAULT_ENCRYPT_IDENTITY: name: Vault id to use for encryption default: description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.' env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}] ini: - {key: vault_encrypt_identity, section: defaults} yaml: {key: defaults.vault_encrypt_identity} DEFAULT_VAULT_IDENTITY_LIST: name: Default vault ids default: [] description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.' env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}] ini: - {key: vault_identity_list, section: defaults} type: list yaml: {key: defaults.vault_identity_list} DEFAULT_VAULT_PASSWORD_FILE: name: Vault password file default: ~ description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id' env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}] ini: - {key: vault_password_file, section: defaults} type: path yaml: {key: defaults.vault_password_file} DEFAULT_VERBOSITY: name: Verbosity default: 0 description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line. 
env: [{name: ANSIBLE_VERBOSITY}] ini: - {key: verbosity, section: defaults} type: integer DEPRECATION_WARNINGS: name: Deprecation messages default: True description: "Toggle to control the showing of deprecation warnings" env: [{name: ANSIBLE_DEPRECATION_WARNINGS}] ini: - {key: deprecation_warnings, section: defaults} type: boolean DEVEL_WARNING: name: Running devel warning default: True description: Toggle to control showing warnings related to running devel env: [{name: ANSIBLE_DEVEL_WARNING}] ini: - {key: devel_warning, section: defaults} type: boolean DIFF_ALWAYS: name: Show differences default: False description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``. env: [{name: ANSIBLE_DIFF_ALWAYS}] ini: - {key: always, section: diff} type: bool DIFF_CONTEXT: name: Difference context default: 3 description: How many lines of context to show when displaying the differences between files. env: [{name: ANSIBLE_DIFF_CONTEXT}] ini: - {key: context, section: diff} type: integer DISPLAY_ARGS_TO_STDOUT: name: Show task arguments default: False description: - "Normally ``ansible-playbook`` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header." - "This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed." - "If you set this to True you should be sure that you have secured your environment's stdout (no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values See How do I keep secret data in my playbook? for more information." env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}] ini: - {key: display_args_to_stdout, section: defaults} type: boolean version_added: "2.1" DISPLAY_SKIPPED_HOSTS: name: Show skipped results default: True description: "Toggle to control displaying skipped task/host entries in a task in the default callback" env: - name: DISPLAY_SKIPPED_HOSTS deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_DISPLAY_SKIPPED_HOSTS" environment variable - name: ANSIBLE_DISPLAY_SKIPPED_HOSTS ini: - {key: display_skipped_hosts, section: defaults} type: boolean DOCSITE_ROOT_URL: name: Root docsite URL default: https://docs.ansible.com/ansible/ description: Root docsite URL used to generate docs URLs in warning/error text; must be an absolute URL with valid scheme and trailing slash. ini: - {key: docsite_root_url, section: defaults} version_added: "2.8" DUPLICATE_YAML_DICT_KEY: name: Controls ansible behaviour when finding duplicate keys in YAML. default: warn description: - By default Ansible will issue a warning when a duplicate dict key is encountered in YAML. - These warnings can be silenced by adjusting this setting to False. 
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}] ini: - {key: duplicate_dict_key, section: defaults} type: string choices: ['warn', 'error', 'ignore'] version_added: "2.9" ERROR_ON_MISSING_HANDLER: name: Missing handler error default: True description: "Toggle to allow missing handlers to become a warning instead of an error when notifying." env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}] ini: - {key: error_on_missing_handler, section: defaults} type: boolean CONNECTION_FACTS_MODULES: name: Map of connections to fact modules default: # use ansible.legacy names on unqualified facts modules to allow library/ overrides asa: ansible.legacy.asa_facts cisco.asa.asa: cisco.asa.asa_facts eos: ansible.legacy.eos_facts arista.eos.eos: arista.eos.eos_facts frr: ansible.legacy.frr_facts frr.frr.frr: frr.frr.frr_facts ios: ansible.legacy.ios_facts cisco.ios.ios: cisco.ios.ios_facts iosxr: ansible.legacy.iosxr_facts cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts junos: ansible.legacy.junos_facts junipernetworks.junos.junos: junipernetworks.junos.junos_facts nxos: ansible.legacy.nxos_facts cisco.nxos.nxos: cisco.nxos.nxos_facts vyos: ansible.legacy.vyos_facts vyos.vyos.vyos: vyos.vyos.vyos_facts exos: ansible.legacy.exos_facts extreme.exos.exos: extreme.exos.exos_facts slxos: ansible.legacy.slxos_facts extreme.slxos.slxos: extreme.slxos.slxos_facts voss: ansible.legacy.voss_facts extreme.voss.voss: extreme.voss.voss_facts ironware: ansible.legacy.ironware_facts community.network.ironware: community.network.ironware_facts description: "Which modules to run during a play's fact gathering stage based on connection" env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}] ini: - {key: connection_facts_modules, section: defaults} type: dict FACTS_MODULES: name: Gather Facts Modules default: - smart description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type." env: [{name: ANSIBLE_FACTS_MODULES}] ini: - {key: facts_modules, section: defaults} type: list vars: - name: ansible_facts_modules GALAXY_IGNORE_CERTS: name: Galaxy validate certs default: False description: - If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate. env: [{name: ANSIBLE_GALAXY_IGNORE}] ini: - {key: ignore_certs, section: galaxy} type: boolean GALAXY_ROLE_SKELETON: name: Galaxy role or collection skeleton directory default: description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``. env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}] ini: - {key: role_skeleton, section: galaxy} type: path GALAXY_ROLE_SKELETON_IGNORE: name: Galaxy skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: patterns of files to ignore inside a Galaxy role or collection skeleton directory env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}] ini: - {key: role_skeleton_ignore, section: galaxy} type: list # TODO: unused? #GALAXY_SCMS: # name: Galaxy SCMS # default: git, hg # description: Available galaxy source control management systems. # env: [{name: ANSIBLE_GALAXY_SCMS}] # ini: # - {key: scms, section: galaxy} # type: list GALAXY_SERVER: default: https://galaxy.ansible.com description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source." 
env: [{name: ANSIBLE_GALAXY_SERVER}] ini: - {key: server, section: galaxy} yaml: {key: galaxy.server} GALAXY_SERVER_LIST: description: - A list of Galaxy servers to use when installing a collection. - The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details. - 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.' - The order of servers in this list is used as the order in which a collection is resolved. - Setting this config option will ignore the :ref:`galaxy_server` config option. env: [{name: ANSIBLE_GALAXY_SERVER_LIST}] ini: - {key: server_list, section: galaxy} type: list version_added: "2.9" GALAXY_TOKEN: default: null description: "GitHub personal access token" env: [{name: ANSIBLE_GALAXY_TOKEN}] ini: - {key: token, section: galaxy} yaml: {key: galaxy.token} GALAXY_TOKEN_PATH: default: ~/.ansible/galaxy_token description: "Local path to galaxy access token file" env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}] ini: - {key: token_path, section: galaxy} type: path version_added: "2.9" GALAXY_DISPLAY_PROGRESS: default: ~ description: - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. - This config option controls whether the display wheel is shown or not. - The default is to show the display wheel if stdout has a tty. env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}] ini: - {key: display_progress, section: galaxy} type: bool version_added: "2.10" HOST_KEY_CHECKING: name: Check host keys default: True description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host' env: [{name: ANSIBLE_HOST_KEY_CHECKING}] ini: - {key: host_key_checking, section: defaults} type: boolean HOST_PATTERN_MISMATCH: name: Control host pattern mismatch behaviour default: 'warning' description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or just ignore it env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}] ini: - {key: host_pattern_mismatch, section: inventory} choices: ['warning', 'error', 'ignore'] version_added: "2.8" INTERPRETER_PYTHON: name: Python interpreter path (or automatic discovery behavior) used for module execution default: auto_legacy env: [{name: ANSIBLE_PYTHON_INTERPRETER}] ini: - {key: interpreter_python, section: defaults} vars: - {name: ansible_python_interpreter} version_added: "2.8" description: - Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the default behavior will change to that of ``auto`` in a future Ansible release).
INTERPRETER_PYTHON_DISTRO_MAP: name: Mapping of known included platform pythons for various Linux distros default: centos: &rhelish '6': /usr/bin/python '8': /usr/libexec/platform-python debian: '10': /usr/bin/python3 fedora: '23': /usr/bin/python3 redhat: *rhelish rhel: *rhelish ubuntu: '14': /usr/bin/python '16': /usr/bin/python3 version_added: "2.8" # FUTURE: add inventory override once we're sure it can't be abused by a rogue target # FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc? INTERPRETER_PYTHON_FALLBACK: name: Ordered list of Python interpreters to check for in discovery default: - /usr/bin/python - python3.7 - python3.6 - python3.5 - python2.7 - python2.6 - /usr/libexec/platform-python - /usr/bin/python3 - python # FUTURE: add inventory override once we're sure it can't be abused by a rogue target version_added: "2.8" TRANSFORM_INVALID_GROUP_CHARS: name: Transform invalid characters in group names default: 'never' description: - Make ansible transform invalid characters in group names supplied by inventory sources. - If 'never' it will allow for the group name but warn about the issue. - When 'ignore', it does the same as 'never', without issuing a warning. - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user - When 'silently', it does the same as 'always', without issuing a warning. env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}] ini: - {key: force_valid_group_names, section: defaults} type: string choices: ['always', 'never', 'ignore', 'silently'] version_added: '2.8' INVALID_TASK_ATTRIBUTE_FAILED: name: Controls whether invalid attributes for a task result in errors instead of warnings default: True description: If 'false', invalid attributes for a task will result in warnings instead of errors type: boolean env: - name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED ini: - key: invalid_task_attribute_failed section: defaults version_added: "2.7" INVENTORY_ANY_UNPARSED_IS_FAILED: name: Controls whether any unparseable inventory source is a fatal error default: False description: > If 'true', it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning. type: boolean env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}] ini: - {key: any_unparsed_is_failed, section: inventory} version_added: "2.7" INVENTORY_CACHE_ENABLED: name: Inventory caching enabled default: False description: Toggle to turn on inventory caching env: [{name: ANSIBLE_INVENTORY_CACHE}] ini: - {key: cache, section: inventory} type: bool INVENTORY_CACHE_PLUGIN: name: Inventory cache plugin description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}] ini: - {key: cache_plugin, section: inventory} INVENTORY_CACHE_PLUGIN_CONNECTION: name: Inventory cache plugin URI to override the defaults section description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}] ini: - {key: cache_connection, section: inventory} INVENTORY_CACHE_PLUGIN_PREFIX: name: Inventory cache plugin table prefix description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead. 
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}] default: ansible_facts ini: - {key: cache_prefix, section: inventory} INVENTORY_CACHE_TIMEOUT: name: Inventory cache plugin expiration timeout description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead. default: 3600 env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}] ini: - {key: cache_timeout, section: inventory} INVENTORY_ENABLED: name: Active Inventory plugins default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml'] description: List of enabled inventory plugins; it also determines the order in which they are used. env: [{name: ANSIBLE_INVENTORY_ENABLED}] ini: - {key: enable_plugins, section: inventory} type: list INVENTORY_EXPORT: name: Set ansible-inventory into export mode default: False description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or one optimized for exporting. env: [{name: ANSIBLE_INVENTORY_EXPORT}] ini: - {key: export, section: inventory} type: bool INVENTORY_IGNORE_EXTS: name: Inventory ignore extensions default: "{{(BLACKLIST_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}" description: List of extensions to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE}] ini: - {key: inventory_ignore_extensions, section: defaults} - {key: ignore_extensions, section: inventory} type: list INVENTORY_IGNORE_PATTERNS: name: Inventory ignore patterns default: [] description: List of patterns to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}] ini: - {key: inventory_ignore_patterns, section: defaults} - {key: ignore_patterns, section: inventory} type: list INVENTORY_UNPARSED_IS_FAILED: name: Unparsed Inventory failure default: False description: > If 'true' it is a fatal error if every single potential inventory source fails to parse, otherwise this situation will only attract a warning. env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}] ini: - {key: unparsed_is_failed, section: inventory} type: bool MAX_FILE_SIZE_FOR_DIFF: name: Diff maximum file size default: 104448 description: Maximum size of files to be considered for diff display env: [{name: ANSIBLE_MAX_DIFF_SIZE}] ini: - {key: max_diff_size, section: defaults} type: int NETWORK_GROUP_MODULES: name: Network module families default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos] description: 'TODO: write it' env: - name: NETWORK_GROUP_MODULES deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_NETWORK_GROUP_MODULES" environment variable - name: ANSIBLE_NETWORK_GROUP_MODULES ini: - {key: network_group_modules, section: defaults} type: list yaml: {key: defaults.network_group_modules} INJECT_FACTS_AS_VARS: default: True description: - Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace. - Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}] ini: - {key: inject_facts_as_vars, section: defaults} type: boolean version_added: "2.5" MODULE_IGNORE_EXTS: name: Module ignore extensions default: "{{(BLACKLIST_EXTS + ('.yaml', '.yml', '.ini'))}}" description: - List of extensions to ignore when looking for modules to load - This is for blacklisting script and binary module fallback extensions env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}] ini: - {key: module_ignore_exts, section: defaults} type: list OLD_PLUGIN_CACHE_CLEARING: description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour. env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}] ini: - {key: old_plugin_cache_clear, section: defaults} type: boolean default: False version_added: "2.8" PARAMIKO_HOST_KEY_AUTO_ADD: # TODO: move to plugin default: False description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}] ini: - {key: host_key_auto_add, section: paramiko_connection} type: boolean PARAMIKO_LOOK_FOR_KEYS: name: look for keys default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}] ini: - {key: look_for_keys, section: paramiko_connection} type: boolean PERSISTENT_CONTROL_PATH_DIR: name: Persistence socket path default: ~/.ansible/pc description: Path to socket to be used by the connection persistence system. env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: persistent_connection} type: path PERSISTENT_CONNECT_TIMEOUT: name: Persistence timeout default: 30 description: This controls how long the persistent connection will remain idle before it is destroyed. env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}] ini: - {key: connect_timeout, section: persistent_connection} type: integer PERSISTENT_CONNECT_RETRY_TIMEOUT: name: Persistence connection retry timeout default: 15 description: This controls the retry timeout for persistent connection to connect to the local domain socket. env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}] ini: - {key: connect_retry_timeout, section: persistent_connection} type: integer PERSISTENT_COMMAND_TIMEOUT: name: Persistence command timeout default: 30 description: This controls the amount of time to wait for response from remote device before timing out persistent connection. env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}] ini: - {key: command_timeout, section: persistent_connection} type: int PLAYBOOK_DIR: name: playbook dir override for non-playbook CLIs (ala --playbook-dir) version_added: "2.9" description: - A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it. env: [{name: ANSIBLE_PLAYBOOK_DIR}] ini: [{key: playbook_dir, section: defaults}] type: path PLAYBOOK_VARS_ROOT: name: playbook vars files root default: top version_added: "2.4.1" description: - This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars - The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. - The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. - The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}] ini: - {key: playbook_vars_root, section: defaults} choices: [ top, bottom, all ] PLUGIN_FILTERS_CFG: name: Config file for limiting valid plugins default: null version_added: "2.5.0" description: - "A path to configuration for filtering which plugins installed on the system are allowed to be used." - "See :ref:`plugin_filtering_config` for details of the filter file's format." - " The default is /etc/ansible/plugin_filters.yml" ini: - key: plugin_filters_cfg section: default deprecated: why: Specifying "plugin_filters_cfg" under the "default" section is deprecated version: "2.12" alternatives: the "defaults" section instead - key: plugin_filters_cfg section: defaults type: path PYTHON_MODULE_RLIMIT_NOFILE: name: Adjust maximum file descriptor soft limit during Python module execution description: - Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits. default: 0 env: - {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE} ini: - {key: python_module_rlimit_nofile, section: defaults} vars: - {name: ansible_python_module_rlimit_nofile} version_added: '2.8' RETRY_FILES_ENABLED: name: Retry files default: False description: This controls whether a failed Ansible playbook should create a .retry file. env: [{name: ANSIBLE_RETRY_FILES_ENABLED}] ini: - {key: retry_files_enabled, section: defaults} type: bool RETRY_FILES_SAVE_PATH: name: Retry files path default: ~ description: - This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. - This file will be overwritten after each run with the list of failed hosts from all plays. env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}] ini: - {key: retry_files_save_path, section: defaults} type: path RUN_VARS_PLUGINS: name: When should vars plugins run relative to inventory default: demand description: - This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection. - Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks. - Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source. env: [{name: ANSIBLE_RUN_VARS_PLUGINS}] ini: - {key: run_vars_plugins, section: defaults} type: str choices: ['demand', 'start'] version_added: "2.10" SHOW_CUSTOM_STATS: name: Display custom stats default: False description: 'This adds the custom stats set via the set_stats plugin to the default output' env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}] ini: - {key: show_custom_stats, section: defaults} type: bool STRING_TYPE_FILTERS: name: Filters to preserve strings default: [string, to_json, to_nice_json, to_yaml, ppretty, json] description: - "This list of filters avoids 'type conversion' when templating variables" - Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example. 
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}] ini: - {key: dont_type_filters, section: jinja2} type: list SYSTEM_WARNINGS: name: System warnings default: True description: - Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts) - These may include warnings about 3rd party packages or other conditions that should be resolved if possible. env: [{name: ANSIBLE_SYSTEM_WARNINGS}] ini: - {key: system_warnings, section: defaults} type: boolean TAGS_RUN: name: Run Tags default: [] type: list description: default list of tags to run in your plays, Skip Tags has precedence. env: [{name: ANSIBLE_RUN_TAGS}] ini: - {key: run, section: tags} version_added: "2.5" TAGS_SKIP: name: Skip Tags default: [] type: list description: default list of tags to skip in your plays, has precedence over Run Tags env: [{name: ANSIBLE_SKIP_TAGS}] ini: - {key: skip, section: tags} version_added: "2.5" TASK_TIMEOUT: name: Task Timeout default: 0 description: - Set the maximum time (in seconds) that a task can run for. - If set to 0 (the default) there is no timeout. env: [{name: ANSIBLE_TASK_TIMEOUT}] ini: - {key: task_timeout, section: defaults} type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_COUNT: name: Worker Shutdown Poll Count default: 0 description: - The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. - After this limit is reached any worker processes still running will be terminated. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}] type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_DELAY: name: Worker Shutdown Poll Delay default: 0.1 description: - The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}] type: float version_added: '2.10' USE_PERSISTENT_CONNECTIONS: name: Persistence default: False description: Toggles the use of persistence for connections. env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}] ini: - {key: use_persistent_connections, section: defaults} type: boolean VARIABLE_PLUGINS_ENABLED: name: Vars plugin whitelist default: ['host_group_vars'] description: Whitelist for variable plugins that require it. env: [{name: ANSIBLE_VARS_ENABLED}] ini: - {key: vars_plugins_enabled, section: defaults} type: list version_added: "2.10" VARIABLE_PRECEDENCE: name: Group variable precedence default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play'] description: Allows to change the group variable precedence merge order. env: [{name: ANSIBLE_PRECEDENCE}] ini: - {key: precedence, section: defaults} type: list version_added: "2.4" WIN_ASYNC_STARTUP_TIMEOUT: name: Windows Async Startup Timeout default: 5 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. - This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. 
The task will only start to be timed against its async_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here. env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}] ini: - {key: win_async_startup_timeout, section: defaults} type: integer vars: - {name: ansible_win_async_startup_timeout} version_added: '2.10' YAML_FILENAME_EXTENSIONS: name: Valid YAML extensions default: [".yml", ".yaml", ".json"] description: - "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these." - 'This affects vars_files, include_vars, inventory and vars plugins among others.' env: - name: ANSIBLE_YAML_FILENAME_EXT ini: - section: defaults key: yaml_valid_extensions type: list NETCONF_SSH_CONFIG: description: This variable is used to enable bastion/jump host with netconf connection. If set to True the bastion/jump host ssh settings should be present in ~/.ssh/config file, alternatively it can be set to custom ssh configuration file path to read the bastion/jump host settings. env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}] ini: - {key: ssh_config, section: netconf_connection} yaml: {key: netconf_connection.ssh_config} default: null STRING_CONVERSION_ACTION: version_added: '2.8' description: - Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc. will be converted by the YAML parser unless fully quoted. - Valid options are 'error', 'warn', and 'ignore'. - Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12. default: 'warn' env: - name: ANSIBLE_STRING_CONVERSION_ACTION ini: - section: defaults key: string_conversion_action type: string VERBOSE_TO_STDERR: version_added: '2.8' description: - Force 'verbose' option to use stderr instead of stdout default: False env: - name: ANSIBLE_VERBOSE_TO_STDERR ini: - section: defaults key: verbose_to_stderr type: bool ...
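The option definitions above are consumed by Ansible's config manager and surface at runtime as attributes on ``ansible.constants`` (the templating code later in this document reads ``C.DEFAULT_JINJA2_NATIVE``, ``C.STRING_TYPE_FILTERS``, and ``C.DEFAULT_UNDEFINED_VAR_BEHAVIOR`` that way). A minimal sketch of that resolution, assuming a default install; the exact values printed depend on the local ansible.cfg and environment:
```python
# Hedged sketch: environment variables named above override ansible.cfg
# ini keys, and "type:" entries arrive already coerced (int, bool, list).
import os
os.environ['ANSIBLE_VERBOSITY'] = '2'    # env name from DEFAULT_VERBOSITY above

from ansible import constants as C       # config is resolved at import time

print(C.DEFAULT_VERBOSITY)               # 2 -- coerced per "type: integer"
print(C.STRING_TYPE_FILTERS)             # ['string', 'to_json', ...] per the default above
print(C.DEFAULT_UNDEFINED_VAR_BEHAVIOR)  # True unless error_on_undefined_vars is overridden
```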
closed
ansible/ansible
https://github.com/ansible/ansible
70831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY Example here is AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there is 2.9. 
That's about the extent of what I know at this point.
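A minimal reproduction sketch outside Ansible (not part of the report; assumes Jinja2 2.10+ for ``nativetypes``, and the variable name mirrors the report) showing why the ``| string`` cast does not survive native templating:
```python
# Under a native environment the final render result is re-evaluated with
# ast.literal_eval, so an integer-looking value loses its "| string" cast
# and comes back as an int -- which the keyed_groups code then rejects.
from jinja2.nativetypes import NativeEnvironment

env = NativeEnvironment()
rendered = env.from_string("{{ owner_id | string }}").render(owner_id="123456789")
print(type(rendered), rendered)  # <class 'int'> 123456789 -- no longer a str
```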
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
lib/ansible/template/__init__.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ast import datetime import os import pkgutil import pwd import re import time from contextlib import contextmanager from distutils.version import LooseVersion from numbers import Number from traceback import format_exc try: from hashlib import sha1 except ImportError: from sha import sha as sha1 from jinja2.exceptions import TemplateSyntaxError, UndefinedError from jinja2.loaders import FileSystemLoader from jinja2.runtime import Context, StrictUndefined from ansible import constants as C from ansible.errors import AnsibleError, AnsibleFilterError, AnsiblePluginRemovedError, AnsibleUndefinedVariable, AnsibleAssertionError from ansible.module_utils.six import iteritems, string_types, text_type from ansible.module_utils.six.moves import range from ansible.module_utils._text import to_native, to_text, to_bytes from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.compat.importlib import import_module from ansible.plugins.loader import filter_loader, lookup_loader, test_loader from ansible.template.safe_eval import safe_eval from ansible.template.template import AnsibleJ2Template from ansible.template.vars import AnsibleJ2Vars from ansible.utils.collection_loader import AnsibleCollectionRef from ansible.utils.display import Display from ansible.utils.collection_loader._collection_finder import _get_collection_metadata from ansible.utils.unsafe_proxy import wrap_var display = Display() __all__ = ['Templar', 'generate_ansible_template_vars'] # A regex for checking to see if a variable we're trying to # expand is just a single variable name. # Primitive Types which we don't want Jinja to convert to strings. NON_TEMPLATED_TYPES = (bool, Number) JINJA2_OVERRIDE = '#jinja2:' from jinja2 import __version__ as j2_version USE_JINJA2_NATIVE = False if C.DEFAULT_JINJA2_NATIVE: try: from jinja2.nativetypes import NativeEnvironment as Environment from ansible.template.native_helpers import ansible_native_concat as j2_concat USE_JINJA2_NATIVE = True except ImportError: from jinja2 import Environment from jinja2.utils import concat as j2_concat display.warning( 'jinja2_native requires Jinja 2.10 and above. ' 'Version detected: %s. Falling back to default.' 
% j2_version ) else: from jinja2 import Environment from jinja2.utils import concat as j2_concat JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin')) JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end')) RANGE_TYPE = type(range(0)) def generate_ansible_template_vars(path, dest_path=None): b_path = to_bytes(path) try: template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name except (KeyError, TypeError): template_uid = os.stat(b_path).st_uid temp_vars = { 'template_host': to_text(os.uname()[1]), 'template_path': path, 'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)), 'template_uid': to_text(template_uid), 'template_fullpath': os.path.abspath(path), 'template_run_date': datetime.datetime.now(), 'template_destpath': to_native(dest_path) if dest_path else None, } managed_default = C.DEFAULT_MANAGED_STR managed_str = managed_default.format( host=temp_vars['template_host'], uid=temp_vars['template_uid'], file=temp_vars['template_path'], ) temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path)))) return temp_vars def _escape_backslashes(data, jinja_env): """Double backslashes within jinja2 expressions A user may enter something like this in a playbook:: debug: msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}" The string inside of the {{ gets interpreted multiple times First by yaml. Then by python. And finally by jinja2 as part of it's variable. Because it is processed by both python and jinja2, the backslash escaped characters get unescaped twice. This means that we'd normally have to use four backslashes to escape that. This is painful for playbook authors as they have to remember different rules for inside vs outside of a jinja2 expression (The backslashes outside of the "{{ }}" only get processed by yaml and python. So they only need to be escaped once). The following code fixes this by automatically performing the extra quoting of backslashes inside of a jinja2 expression. """ if '\\' in data and '{{' in data: new_data = [] d2 = jinja_env.preprocess(data) in_var = False for token in jinja_env.lex(d2): if token[1] == 'variable_begin': in_var = True new_data.append(token[2]) elif token[1] == 'variable_end': in_var = False new_data.append(token[2]) elif in_var and token[1] == 'string': # Double backslashes only if we're inside of a jinja2 variable new_data.append(token[2].replace('\\', '\\\\')) else: new_data.append(token[2]) data = ''.join(new_data) return data def is_template(data, jinja_env): """This function attempts to quickly detect whether a value is a jinja2 template. To do so, we look for the first 2 matching jinja2 tokens for start and end delimiters. """ found = None start = True comment = False d2 = jinja_env.preprocess(data) # This wraps a lot of code, but this is due to lex returning a generator # so we may get an exception at any part of the loop try: for token in jinja_env.lex(d2): if token[1] in JINJA2_BEGIN_TOKENS: if start and token[1] == 'comment_begin': # Comments can wrap other token types comment = True start = False # Example: variable_end -> variable found = token[1].split('_')[0] elif token[1] in JINJA2_END_TOKENS: if token[1].split('_')[0] == found: return True elif comment: continue return False except TemplateSyntaxError: return False return False def _count_newlines_from_end(in_str): ''' Counts the number of newlines at the end of a string. 
This is used during the jinja2 templating to ensure the count matches the input, since some newlines may be thrown away during the templating. ''' try: i = len(in_str) j = i - 1 while in_str[j] == '\n': j -= 1 return i - 1 - j except IndexError: # Uncommon cases: zero length string and string containing only newlines return i def recursive_check_defined(item): from jinja2.runtime import Undefined if isinstance(item, MutableMapping): for key in item: recursive_check_defined(item[key]) elif isinstance(item, list): for i in item: recursive_check_defined(i) else: if isinstance(item, Undefined): raise AnsibleFilterError("{0} is undefined".format(item)) def _is_rolled(value): """Helper method to determine if something is an unrolled generator, iterator, or similar object """ return ( isinstance(value, Iterator) or isinstance(value, MappingView) or isinstance(value, RANGE_TYPE) ) def _unroll_iterator(func): """Wrapper function, that intercepts the result of a filter and auto unrolls a generator, so that users are not required to explicitly use ``|list`` to unroll. """ def wrapper(*args, **kwargs): ret = func(*args, **kwargs) if _is_rolled(ret): return list(ret) return ret # This code is duplicated from ``functools.update_wrapper`` from Py3.7. # ``functools.update_wrapper`` was failing when the func was ``functools.partial`` for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'): try: value = getattr(func, attr) except AttributeError: pass else: setattr(wrapper, attr, value) for attr in ('__dict__',): getattr(wrapper, attr).update(getattr(func, attr, {})) wrapper.__wrapped__ = func return wrapper class AnsibleUndefined(StrictUndefined): ''' A custom Undefined class, which returns further Undefined objects on access, rather than throwing an exception. ''' def __getattr__(self, name): if name == '__UNSAFE__': # AnsibleUndefined should never be assumed to be unsafe # This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True`` raise AttributeError(name) # Return original Undefined object to preserve the first failure context return self def __getitem__(self, key): # Return original Undefined object to preserve the first failure context return self def __repr__(self): return 'AnsibleUndefined' def __contains__(self, item): # Return original Undefined object to preserve the first failure context return self class AnsibleContext(Context): ''' A custom context, which intercepts resolve() calls and sets a flag internally if any variable lookup returns an AnsibleUnsafe value. This flag is checked post-templating, and (when set) will result in the final templated result being wrapped in AnsibleUnsafe. ''' def __init__(self, *args, **kwargs): super(AnsibleContext, self).__init__(*args, **kwargs) self.unsafe = False def _is_unsafe(self, val): ''' Our helper function, which will also recursively check dict and list entries due to the fact that they may be repr'd and contain a key or value which contains jinja2 syntax and would otherwise lose the AnsibleUnsafe value. 
''' if isinstance(val, dict): for key in val.keys(): if self._is_unsafe(val[key]): return True elif isinstance(val, list): for item in val: if self._is_unsafe(item): return True elif getattr(val, '__UNSAFE__', False) is True: return True return False def _update_unsafe(self, val): if val is not None and not self.unsafe and self._is_unsafe(val): self.unsafe = True def resolve(self, key): ''' The intercepted resolve(), which uses the helper above to set the internal flag whenever an unsafe variable value is returned. ''' val = super(AnsibleContext, self).resolve(key) self._update_unsafe(val) return val def resolve_or_missing(self, key): val = super(AnsibleContext, self).resolve_or_missing(key) self._update_unsafe(val) return val def get_all(self): """Return the complete context as a dict including the exported variables. For optimizations reasons this might not return an actual copy so be careful with using it. This is to prevent from running ``AnsibleJ2Vars`` through dict(): ``dict(self.parent, **self.vars)`` In Ansible this means that ALL variables would be templated in the process of re-creating the parent because ``AnsibleJ2Vars`` templates each variable in its ``__getitem__`` method. Instead we re-create the parent via ``AnsibleJ2Vars.add_locals`` that creates a new ``AnsibleJ2Vars`` copy without templating each variable. This will prevent unnecessarily templating unused variables in cases like setting a local variable and passing it to {% include %} in a template. Also see ``AnsibleJ2Template``and https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729 """ if LooseVersion(j2_version) >= LooseVersion('2.9'): if not self.vars: return self.parent if not self.parent: return self.vars if isinstance(self.parent, AnsibleJ2Vars): return self.parent.add_locals(self.vars) else: # can this happen in Ansible? return dict(self.parent, **self.vars) class JinjaPluginIntercept(MutableMapping): def __init__(self, delegatee, pluginloader, *args, **kwargs): super(JinjaPluginIntercept, self).__init__(*args, **kwargs) self._delegatee = delegatee self._pluginloader = pluginloader if self._pluginloader.class_name == 'FilterModule': self._method_map_name = 'filters' self._dirname = 'filter' elif self._pluginloader.class_name == 'TestModule': self._method_map_name = 'tests' self._dirname = 'test' self._collection_jinja_func_cache = {} # FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's # aren't supposed to change during a run def __getitem__(self, key): try: if not isinstance(key, string_types): raise ValueError('key must be a string') key = to_native(key) if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect func = self._delegatee.get(key) if func: return func # didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path leaf_key = key key = 'ansible.builtin.' 
+ key else: leaf_key = key.split('.')[-1] acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname) if not acr: raise KeyError('invalid plugin name: {0}'.format(key)) ts = _get_collection_metadata(acr.collection) # TODO: implement support for collection-backed redirect (currently only builtin) # TODO: implement cycle detection (unified across collection redir as well) routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {}) deprecation_entry = routing_entry.get('deprecation') if deprecation_entry: warning_text = deprecation_entry.get('warning_text') removal_date = deprecation_entry.get('removal_date') removal_version = deprecation_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key) display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection) tombstone_entry = routing_entry.get('tombstone') if tombstone_entry: warning_text = tombstone_entry.get('warning_text') removal_date = tombstone_entry.get('removal_date') removal_version = tombstone_entry.get('removal_version') if not warning_text: warning_text = '{0} "{1}" has been removed'.format(self._dirname, key) exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection, removed=True) raise AnsiblePluginRemovedError(exc_msg) redirect_fqcr = routing_entry.get('redirect', None) if redirect_fqcr: acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname) display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource)) key = redirect_fqcr # TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs) func = self._collection_jinja_func_cache.get(key) if func: return func try: pkg = import_module(acr.n_python_package_name) except ImportError: raise KeyError() parent_prefix = acr.collection if acr.subdirs: parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs) # TODO: implement collection-level redirect for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'): if ispkg: continue try: plugin_impl = self._pluginloader.get(module_name) except Exception as e: raise TemplateSyntaxError(to_native(e), 0) method_map = getattr(plugin_impl, self._method_map_name) for f in iteritems(method_map()): fq_name = '.'.join((parent_prefix, f[0])) # FIXME: detect/warn on intra-collection function name collisions self._collection_jinja_func_cache[fq_name] = _unroll_iterator(f[1]) function_impl = self._collection_jinja_func_cache[key] return function_impl except AnsiblePluginRemovedError as apre: raise TemplateSyntaxError(to_native(apre), 0) except KeyError: raise except Exception as ex: display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex))) display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc())) raise TemplateSyntaxError(to_native(ex), 0) def __setitem__(self, key, value): return self._delegatee.__setitem__(key, value) def __delitem__(self, key): raise NotImplementedError() def __iter__(self): # not strictly accurate since we're not counting dynamically-loaded values return iter(self._delegatee) def __len__(self): # not strictly accurate since we're not counting dynamically-loaded values return len(self._delegatee) class AnsibleEnvironment(Environment): ''' Our custom environment, which simply allows us to override the class-level 
values for the Template and Context classes used by jinja2 internally. ''' context_class = AnsibleContext template_class = AnsibleJ2Template def __init__(self, *args, **kwargs): super(AnsibleEnvironment, self).__init__(*args, **kwargs) self.filters = JinjaPluginIntercept(self.filters, filter_loader) self.tests = JinjaPluginIntercept(self.tests, test_loader) class Templar: ''' The main class for templating, with the main entry-point of template(). ''' def __init__(self, loader, shared_loader_obj=None, variables=None): variables = {} if variables is None else variables self._loader = loader self._filters = None self._tests = None self._available_variables = variables self._cached_result = {} if loader: self._basedir = loader.get_basedir() else: self._basedir = './' if shared_loader_obj: self._filter_loader = getattr(shared_loader_obj, 'filter_loader') self._test_loader = getattr(shared_loader_obj, 'test_loader') self._lookup_loader = getattr(shared_loader_obj, 'lookup_loader') else: self._filter_loader = filter_loader self._test_loader = test_loader self._lookup_loader = lookup_loader # flags to determine whether certain failures during templating # should result in fatal errors being raised self._fail_on_lookup_errors = True self._fail_on_filter_errors = True self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR self.environment = AnsibleEnvironment( trim_blocks=True, undefined=AnsibleUndefined, extensions=self._get_extensions(), finalize=self._finalize, loader=FileSystemLoader(self._basedir), ) # jinja2 global is inconsistent across versions, this normalizes them self.environment.globals['dict'] = dict # Custom globals self.environment.globals['lookup'] = self._lookup self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup self.environment.globals['now'] = self._now_datetime self.environment.globals['finalize'] = self._finalize # the current rendering context under which the templar class is working self.cur_context = None self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string)) self._clean_regex = re.compile(r'(?:%s|%s|%s|%s)' % ( self.environment.variable_start_string, self.environment.block_start_string, self.environment.block_end_string, self.environment.variable_end_string )) self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' % ('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string)) def _get_filters(self): ''' Returns filter plugins, after loading and caching them if need be ''' if self._filters is not None: return self._filters.copy() self._filters = dict() for fp in self._filter_loader.all(): self._filters.update(fp.filters()) return self._filters.copy() def _get_tests(self): ''' Returns tests plugins, after loading and caching them if need be ''' if self._tests is not None: return self._tests.copy() self._tests = dict() for fp in self._test_loader.all(): self._tests.update(fp.tests()) return self._tests.copy() def _get_extensions(self): ''' Return jinja2 extensions to load. If some extensions are set via jinja_extensions in ansible.cfg, we try to load them with the jinja environment. 
''' jinja_exts = [] if C.DEFAULT_JINJA2_EXTENSIONS: # make sure the configuration directive doesn't contain spaces # and split extensions in an array jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',') return jinja_exts @property def available_variables(self): return self._available_variables @available_variables.setter def available_variables(self, variables): ''' Sets the list of template variables this Templar instance will use to template things, so we don't have to pass them around between internal methods. We also clear the template cache here, as the variables are being changed. ''' if not isinstance(variables, Mapping): raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables))) self._available_variables = variables self._cached_result = {} def set_available_variables(self, variables): display.deprecated( 'set_available_variables is being deprecated. Use "@available_variables.setter" instead.', version='2.13', collection_name='ansible.builtin' ) self.available_variables = variables @contextmanager def set_temporary_context(self, **kwargs): """Context manager used to set temporary templating context, without having to worry about resetting original values afterward Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to set context on another object, it must be in ``mapping``. """ mapping = { 'available_variables': self, 'searchpath': self.environment.loader, } original = {} for key, value in kwargs.items(): obj = mapping.get(key, self.environment) try: original[key] = getattr(obj, key) if value is not None: setattr(obj, key, value) except AttributeError: # Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7 pass yield for key in original: obj = mapping.get(key, self.environment) setattr(obj, key, original[key]) def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, convert_data=True, static_vars=None, cache=True, disable_lookups=False): ''' Templates (possibly recursively) any given data as input. If convert_bare is set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}') before being sent through the template engine. ''' static_vars = [''] if static_vars is None else static_vars # Don't template unsafe variables, just return them. if hasattr(variable, '__UNSAFE__'): return variable if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors try: if convert_bare: variable = self._convert_bare_variable(variable) if isinstance(variable, string_types): result = variable if self.is_possibly_template(variable): # Check to see if the string we are trying to render is just referencing a single # var. In this case we don't want to accidentally change the type of the variable # to a string by using the jinja template renderer. We just want to pass it. 
only_one = self.SINGLE_VAR.match(variable) if only_one: var_name = only_one.group(1) if var_name in self._available_variables: resolved_val = self._available_variables[var_name] if isinstance(resolved_val, NON_TEMPLATED_TYPES): return resolved_val elif resolved_val is None: return C.DEFAULT_NULL_REPRESENTATION # Using a cache in order to prevent template calls with already templated variables sha1_hash = None if cache: variable_hash = sha1(text_type(variable).encode('utf-8')) options_hash = sha1( ( text_type(preserve_trailing_newlines) + text_type(escape_backslashes) + text_type(fail_on_undefined) + text_type(overrides) ).encode('utf-8') ) sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest() if cache and sha1_hash in self._cached_result: result = self._cached_result[sha1_hash] else: result = self.do_template( variable, preserve_trailing_newlines=preserve_trailing_newlines, escape_backslashes=escape_backslashes, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) if not USE_JINJA2_NATIVE: unsafe = hasattr(result, '__UNSAFE__') if convert_data and not self._no_type_regex.match(variable): # if this looks like a dictionary or list, convert it to such using the safe_eval method if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \ result.startswith("[") or result in ("True", "False"): eval_results = safe_eval(result, include_exceptions=True) if eval_results[1] is None: result = eval_results[0] if unsafe: result = wrap_var(result) else: # FIXME: if the safe_eval raised an error, should we do something with it? pass # we only cache in the case where we have a single variable # name, to make sure we're not putting things which may otherwise # be dynamic in the cache (filters, lookups, etc.) if cache and only_one: self._cached_result[sha1_hash] = result return result elif is_sequence(variable): return [self.template( v, preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) for v in variable] elif isinstance(variable, Mapping): d = {} # we don't use iteritems() here to avoid problems if the underlying dict # changes sizes due to the templating, which can happen with hostvars for k in variable.keys(): if k not in static_vars: d[k] = self.template( variable[k], preserve_trailing_newlines=preserve_trailing_newlines, fail_on_undefined=fail_on_undefined, overrides=overrides, disable_lookups=disable_lookups, ) else: d[k] = variable[k] return d else: return variable except AnsibleFilterError: if self._fail_on_filter_errors: raise else: return variable def is_template(self, data): '''lets us know if data has a template''' if isinstance(data, string_types): return is_template(data, self.environment) elif isinstance(data, (list, tuple)): for v in data: if self.is_template(v): return True elif isinstance(data, dict): for k in data: if self.is_template(k) or self.is_template(data[k]): return True return False templatable = is_template def is_possibly_template(self, data): '''Determines if a string looks like a template, by seeing if it contains a jinja2 start delimiter. Does not guarantee that the string is actually a template. This is different than ``is_template`` which is more strict. This method may return ``True`` on a string that is not templatable. 
Useful when guarding passing a string for templating, but when you want to allow the templating engine to make the final assessment which may result in ``TemplateSyntaxError``. ''' env = self.environment if isinstance(data, string_types): for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string): if marker in data: return True return False def _convert_bare_variable(self, variable): ''' Wraps a bare string, which may have an attribute portion (ie. foo.bar) in jinja2 variable braces so that it is evaluated properly. ''' if isinstance(variable, string_types): contains_filters = "|" in variable first_part = variable.split("|")[0].split(".")[0].split("[")[0] if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable: return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string) # the variable didn't meet the conditions to be converted, # so just return it as-is return variable def _finalize(self, thing): ''' A custom finalize method for jinja2, which prevents None from being returned. This avoids a string of ``"None"`` as ``None`` has no importance in YAML. If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always ''' if _is_rolled(thing): # Auto unroll a generator, so that users are not required to # explicitly use ``|list`` to unroll # This only affects the scenario where the final result of templating # is a generator, and not where a filter creates a generator in the middle # of a template. See ``_unroll_iterator`` for the other case. This is probably # unncessary return list(thing) if USE_JINJA2_NATIVE: return thing return thing if thing is not None else '' def _fail_lookup(self, name, *args, **kwargs): raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name) def _now_datetime(self, utc=False, fmt=None): '''jinja2 global function to return current datetime, potentially formatted via strftime''' if utc: now = datetime.datetime.utcnow() else: now = datetime.datetime.now() if fmt: return now.strftime(fmt) return now def _query_lookup(self, name, *args, **kwargs): ''' wrapper for lookup, force wantlist true''' kwargs['wantlist'] = True return self._lookup(name, *args, **kwargs) def _lookup(self, name, *args, **kwargs): instance = self._lookup_loader.get(name, loader=self._loader, templar=self) if instance is not None: wantlist = kwargs.pop('wantlist', False) allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS) errors = kwargs.pop('errors', 'strict') from ansible.utils.listify import listify_lookup_plugin_terms loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False) # safely catch run failures per #5059 try: ran = instance.run(loop_terms, variables=self._available_variables, **kwargs) except (AnsibleUndefinedVariable, UndefinedError) as e: raise AnsibleUndefinedVariable(e) except Exception as e: if self._fail_on_lookup_errors: msg = u"An unhandled exception occurred while running the lookup plugin '%s'. 
Error was a %s, original message: %s" % \ (name, type(e), to_text(e)) if errors == 'warn': display.warning(msg) elif errors == 'ignore': display.display(msg, log_only=True) else: raise AnsibleError(to_native(msg)) ran = [] if wantlist else None if ran and not allow_unsafe: if wantlist: ran = wrap_var(ran) else: try: ran = wrap_var(",".join(ran)) except TypeError: # Lookup Plugins should always return lists. Throw an error if that's not # the case: if not isinstance(ran, Sequence): raise AnsibleError("The lookup plugin '%s' did not return a list." % name) # The TypeError we can recover from is when the value *inside* of the list # is not a string if len(ran) == 1: ran = wrap_var(ran[0]) else: ran = wrap_var(ran) if self.cur_context: self.cur_context.unsafe = True return ran else: raise AnsibleError("lookup plugin (%s) not found" % name) def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False): if USE_JINJA2_NATIVE and not isinstance(data, string_types): return data # For preserving the number of input newlines in the output (used # later in this method) data_newlines = _count_newlines_from_end(data) if fail_on_undefined is None: fail_on_undefined = self._fail_on_undefined_errors try: # allows template header overrides to change jinja2 options. if overrides is None: myenv = self.environment.overlay() else: myenv = self.environment.overlay(overrides) # Get jinja env overrides from template if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE): eol = data.find('\n') line = data[len(JINJA2_OVERRIDE):eol] data = data[eol + 1:] for pair in line.split(','): (key, val) = pair.split(':') key = key.strip() setattr(myenv, key, ast.literal_eval(val.strip())) # Adds Ansible custom filters and tests myenv.filters.update(self._get_filters()) for k in myenv.filters: myenv.filters[k] = _unroll_iterator(myenv.filters[k]) myenv.tests.update(self._get_tests()) if escape_backslashes: # Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\". data = _escape_backslashes(data, myenv) try: t = myenv.from_string(data) except TemplateSyntaxError as e: raise AnsibleError("template error while templating string: %s. 
String: %s" % (to_native(e), to_native(data))) except Exception as e: if 'recursion' in to_native(e): raise AnsibleError("recursive loop detected in template string: %s" % to_native(data)) else: return data if disable_lookups: t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup jvars = AnsibleJ2Vars(self, t.globals) self.cur_context = new_context = t.new_context(jvars, shared=True) rf = t.root_render_func(new_context) try: res = j2_concat(rf) if getattr(new_context, 'unsafe', False): res = wrap_var(res) except TypeError as te: if 'AnsibleUndefined' in to_native(te): errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data) errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te) raise AnsibleUndefinedVariable(errmsg) else: display.debug("failing because of a type error, template data is: %s" % to_text(data)) raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te))) if USE_JINJA2_NATIVE and not isinstance(res, string_types): return res if preserve_trailing_newlines: # The low level calls above do not preserve the newline # characters at the end of the input data, so we use the # calculate the difference in newlines and append them # to the resulting output for parity # # jinja2 added a keep_trailing_newline option in 2.7 when # creating an Environment. That would let us make this code # better (remove a single newline if # preserve_trailing_newlines is False). Once we can depend on # that version being present, modify our code to set that when # initializing self.environment and remove a single trailing # newline here if preserve_newlines is False. res_newlines = _count_newlines_from_end(res) if data_newlines > res_newlines: res += self.environment.newline_sequence * (data_newlines - res_newlines) return res except (UndefinedError, AnsibleUndefinedVariable) as e: if fail_on_undefined: raise AnsibleUndefinedVariable(e) else: display.debug("Ignoring undefined failure: %s" % to_text(e)) return data # for backwards compatibility in case anyone is using old private method directly _do_template = do_template
closed
ansible/ansible
https://github.com/ansible/ansible
70,831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY Example here is AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there in 2.9.
That's about the extent of what I know at this point.
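The recast described in the report can be reproduced with plain Jinja2, independent of any inventory plugin. A minimal sketch of the suspected mechanism (the account ID is made up; both Jinja2's native concat and Ansible's `ansible_native_concat` round-trip the rendered text through `ast.literal_eval`):

```python
from jinja2.nativetypes import NativeEnvironment

env = NativeEnvironment()

# Even with an explicit | string filter, the native concat step feeds the
# rendered text back through literal_eval, turning "123456789" into an int.
result = env.from_string('{{ owner_id | string }}').render(owner_id='123456789')
print(type(result), result)  # <class 'int'> 123456789
```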
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
lib/ansible/template/native_helpers.py
# Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ast import literal_eval from itertools import islice, chain import types from jinja2.runtime import StrictUndefined from ansible.module_utils._text import to_text from ansible.module_utils.common.collections import is_sequence, Mapping from ansible.module_utils.common.text.converters import container_to_text from ansible.module_utils.six import PY2 from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode def _fail_on_undefined(data): """Recursively find an undefined value in a nested data structure and properly raise the undefined exception. """ if isinstance(data, Mapping): for value in data.values(): _fail_on_undefined(value) elif is_sequence(data): for item in data: _fail_on_undefined(item) else: if isinstance(data, StrictUndefined): # To actually raise the undefined exception we need to # access the undefined object otherwise the exception would # be raised on the next access which might not be properly # handled. # See https://github.com/ansible/ansible/issues/52158 # and StrictUndefined implementation in upstream Jinja2. str(data) return data def ansible_native_concat(nodes): """Return a native Python type from the list of compiled nodes. If the result is a single node, its value is returned. Otherwise, the nodes are concatenated as strings. If the result can be parsed with :func:`ast.literal_eval`, the parsed value is returned. Otherwise, the string is returned. https://github.com/pallets/jinja/blob/master/src/jinja2/nativetypes.py """ head = list(islice(nodes, 2)) if not head: return None if len(head) == 1: out = _fail_on_undefined(head[0]) # TODO send unvaulted data to literal_eval? if isinstance(out, AnsibleVaultEncryptedUnicode): return out.data else: if isinstance(nodes, types.GeneratorType): nodes = chain(head, nodes) out = u''.join([to_text(_fail_on_undefined(v)) for v in nodes]) try: out = literal_eval(out) if PY2: # ensure bytes are not returned back into Ansible from templating out = container_to_text(out) return out except (ValueError, SyntaxError, MemoryError): return out
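The tail of `ansible_native_concat` is where the issue above originates: even a single rendered node falls through to `literal_eval`, so only strings that do not parse as Python literals survive as strings. A standalone sketch of just that fall-through (`parse_like_native_concat` is an illustrative name, not part of Ansible):

```python
from ast import literal_eval

def parse_like_native_concat(rendered):
    """Mimic the final try/except of ansible_native_concat."""
    try:
        return literal_eval(rendered)
    except (ValueError, SyntaxError, MemoryError):
        return rendered

print(parse_like_native_concat('123456789'))    # 123456789 -> int again
print(parse_like_native_concat("'123456789'"))  # '123456789' -> str, the quotes survive literal_eval
print(parse_like_native_concat('web-5'))        # 'web-5' -> str, not a Python literal
```

This is also why the casting tests later in this document wrap values in an extra pair of quotes (`"'{{ i_two }}'"`) when a string result is wanted.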
closed
ansible/ansible
https://github.com/ansible/ansible
70,831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY Example here is AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there in 2.9.
That's about the extent of what I know at this point.
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
test/integration/targets/jinja2_native_types/test_casting.yml
- name: cast things to other things set_fact: int_to_str: "'{{ i_two }}'" str_to_int: "{{ s_two|int }}" dict_to_str: "'{{ dict_one }}'" list_to_str: "'{{ list_one }}'" int_to_bool: "{{ i_one|bool }}" str_true_to_bool: "{{ s_true|bool }}" str_false_to_bool: "{{ s_false|bool }}" - assert: that: - 'int_to_str == "2"' - 'int_to_str|type_debug in ["str", "unicode"]' - 'str_to_int == 2' - 'str_to_int|type_debug == "int"' - 'dict_to_str|type_debug in ["str", "unicode"]' - 'list_to_str|type_debug in ["str", "unicode"]' - 'int_to_bool is sameas true' - 'int_to_bool|type_debug == "bool"' - 'str_true_to_bool is sameas true' - 'str_true_to_bool|type_debug == "bool"' - 'str_false_to_bool is sameas false' - 'str_false_to_bool|type_debug == "bool"'
closed
ansible/ansible
https://github.com/ansible/ansible
70,831
keyed_groups with native types recasts integer strings back to integers, errors
##### SUMMARY Example here is AWS account ID, which is clearly an integer-like thing. The keyed_group templating logic will error if it gets a type other than a string... but sometime after Ansible 2.9, it seems that I _can't_ cast it to a string. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/__init__.py ##### ANSIBLE VERSION ``` $ ansible --version [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point. ansible 2.11.0.dev0 config file = None configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible ansible collection location = /Users/alancoding/.ansible/collections:/usr/share/ansible/collections executable location = /Users/alancoding/.virtualenvs/awx_collection/bin/ansible python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)] ``` ##### CONFIGURATION Defaults ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE Inventory file `aws_ec2.yml` ```yaml compose: ec2_account_id: owner_id keyed_groups: - key: ec2_account_id | string parent_group: accounts prefix: '' separator: '' plugin: amazon.aws.aws_ec2 ``` Command: ``` AWS_ACCESS_KEY_ID=<redacted> AWS_SECRET_ACCESS_KEY=<redacted> ANSIBLE_JINJA2_NATIVE=True ansible-inventory -i testing/awx_424_0bzcwopc/aws_ec2.yml --list --export -vvv ``` ##### EXPECTED RESULTS Behavior in 2.9 is that it gives JSON data for the inventory. I think it also needs to be stated that input variables look like this: ``` "ec2_account_id": "123456789", "owner_id": "123456789", ``` ##### ACTUAL RESULTS ``` [WARNING]: * Failed to parse /Users/alancoding/Documents/tower/testing/awx_424_0bzcwopc/aws_ec2.yml with auto plugin: Invalid group name format, expected a string or a list of them or dictionary, got: <class 'int'> File "/Users/alancoding/Documents/repos/ansible/lib/ansible/inventory/manager.py", line 289, in parse_source plugin.parse(self._inventory, self._loader, source, cache=cache) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/auto.py", line 58, in parse plugin.parse(inventory, loader, path, cache=cache) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 644, in parse self._populate(results, hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 534, in _populate self._add_hosts(hosts=groups[group], group=group, hostnames=hostnames) File "/Users/alancoding/.ansible/collections/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py", line 569, in _add_hosts self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host, hostname, strict=strict) File "/Users/alancoding/Documents/repos/ansible/lib/ansible/plugins/inventory/__init__.py", line 431, in _add_host_to_keyed_groups raise AnsibleParserError("Invalid group name format, expected a string or a list of them or dictionary, got: %s" % type(key)) ``` No matter what I do, it doesn't seem to use this as a string. Presumably, this may be something that can be reproduced with the constructed inventory plugin by itself, but I have not gotten that far yet. So something is forcing this into an integer, and it wasn't there in 2.9.
That's about the extent of what I know at this point.
https://github.com/ansible/ansible/issues/70831
https://github.com/ansible/ansible/pull/70988
7195788ffe8ad63c8d9e36f2bc896c96ddfdaa49
b66d66027ece03f3f0a3fdb5fd6b8213965a2f1d
2020-07-23T03:01:56Z
python
2020-08-11T08:19:49Z
test/integration/targets/jinja2_native_types/test_dunder.yml
- name: test variable dunder set_fact: var_dunder: "{{ b_true.__class__ }}" - assert: that: - 'var_dunder|type_debug == "type"' - name: test constant dunder set_fact: const_dunder: "{{ true.__class__ }}" - assert: that: - 'const_dunder|type_debug == "type"' - name: test constant dunder to string set_fact: const_dunder: "{{ true.__class__|string }}" - assert: that: - 'const_dunder|type_debug in ["str", "unicode"]'
closed
ansible/ansible
https://github.com/ansible/ansible
69,364
Allow digital signature verification in get_url
### SUMMARY The `get_url` module does file integrity checking and allows passing a SHASUM text file on a remote host over HTTP. Typically, binaries and their hashes are stored together, so while the integrity control works, there is no verification that the binary and its corresponding SHASUM file have not been altered by a malicious actor. Sometimes software providers will include a digital signature for cryptographically verifying the SHASUM file. This is a feature request to support verifying digital signatures. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME get_url ##### ADDITIONAL INFORMATION Consider the software package Vault by HashiCorp. It's a single binary in a zip archive precompiled for practically every common architecture and operating system. It's used to protect secrets, so there is value in a threat actor modifying the code to install backdoors or other code to compromise or disclose those secrets. HashiCorp is aware of this, so they have a GPG key pair: the private key they keep secret and the public key they publish on their website and in all major GPG public key servers. Whenever they create a new release, they SHA256 all of the binaries, put them in a SHA256SUMS file, and sign it with their private key. I would imagine providing an attribute for `get_url` to specify the public key ID as well as the path to the digital signature. For example: ```yaml - name: Download vault get_url: url: https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip dest: vault_1.4.1_linux_arm64.zip checksum: sha256:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS checksum_sig: 51852D87348FFC4C:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig ``` Right now I have to download the SHA256SUMS file to verify the signature, and then I can't pass that file to the get_url checksum parameter because it only accepts the raw checksum or a URL; it would also be nice to pass a local path to this field and the hypothesized checksum_sig field. These are the steps I am using to ensure the integrity and authenticity of Vault: ```shell # Import HashiCorp's Public Key from the OpenPGP Public Key Server $ gpg --keyserver keys.openpgp.org --recv-keys 51852D87348FFC4C gpg: directory '/root/.gnupg' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 51852D87348FFC4C: public key "HashiCorp Security <[email protected]>" imported gpg: Total number processed: 1 gpg: imported: 1 # Download all files $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip # Verify signature $ gpg --verify vault_1.4.1_SHA256SUMS.sig vault_1.4.1_SHA256SUMS gpg: Signature made Thu Apr 30 08:20:29 2020 UTC gpg: using RSA key 51852D87348FFC4C gpg: Good signature from "HashiCorp Security <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE F189 5185 2D87 348F FC4C # Verify binary hash $ shasum -a 256 -c vault_1.4.1_SHA256SUMS # ... vault_1.4.1_linux_arm64.zip: OK # ... ```
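Until something like the proposed option exists (`checksum_sig` is the reporter's hypothetical parameter, not a real one), the verify-then-hash flow from the transcript can be scripted around the real `gpg --verify` CLI; a sketch under that assumption, using the file names from the report:

```python
import hashlib
import subprocess

def verify_release(binary_path, sums_path, sig_path):
    """Check the detached GPG signature over a SHA256SUMS file, then confirm
    the binary's digest appears in that file. Raises on any failure."""
    # gpg exits non-zero for a bad or unknown signature; check=True raises then.
    subprocess.run(['gpg', '--verify', sig_path, sums_path], check=True)

    with open(binary_path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    with open(sums_path) as f:
        known = {line.split()[0] for line in f if line.strip()}

    if digest not in known:
        raise ValueError('checksum mismatch for %s' % binary_path)

verify_release('vault_1.4.1_linux_arm64.zip',
               'vault_1.4.1_SHA256SUMS',
               'vault_1.4.1_SHA256SUMS.sig')
```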
https://github.com/ansible/ansible/issues/69364
https://github.com/ansible/ansible/pull/71205
a1a50bb3cd0c2d6f2f4cb260a43553c23e806d8a
eb8b3a8479ec82ad622f86ac46f3e9cc083952b8
2020-05-07T03:37:43Z
python
2020-08-17T16:21:15Z
changelogs/fragments/71205_get_url_allow_checksum_file_url.yml
closed
ansible/ansible
https://github.com/ansible/ansible
69,364
Allow digital signature verification in get_url
### SUMMARY The `get_url` module does file integrity checking and allows passing a SHASUM text file on a remote host over HTTP. Typically, binaries and their hashes are stored together, so while the integrity control works, there is no verification that the binary and its corresponding SHASUM file have not been altered by a malicious actor. Sometimes software providers will include a digital signature for cryptographically verifying the SHASUM file. This is a feature request to support verifying digital signatures. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME get_url ##### ADDITIONAL INFORMATION Consider the software package Vault by HashiCorp. It's a single binary in a zip archive precompiled for practically every common architecture and operating system. It's used to protect secrets, so there is value in a threat actor modifying the code to install backdoors or other code to compromise or disclose those secrets. HashiCorp is aware of this, so they have a GPG key pair: the private key they keep secret and the public key they publish on their website and in all major GPG public key servers. Whenever they create a new release, they SHA256 all of the binaries, put them in a SHA256SUMS file, and sign it with their private key. I would imagine providing an attribute for `get_url` to specify the public key ID as well as the path to the digital signature. For example: ```yaml - name: Download vault get_url: url: https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip dest: vault_1.4.1_linux_arm64.zip checksum: sha256:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS checksum_sig: 51852D87348FFC4C:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig ``` Right now I have to download the SHA256SUMS file to verify the signature, and then I can't pass that file to the get_url checksum parameter because it only accepts the raw checksum or a URL; it would also be nice to pass a local path to this field and the hypothesized checksum_sig field. These are the steps I am using to ensure the integrity and authenticity of Vault: ```shell # Import HashiCorp's Public Key from the OpenPGP Public Key Server $ gpg --keyserver keys.openpgp.org --recv-keys 51852D87348FFC4C gpg: directory '/root/.gnupg' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 51852D87348FFC4C: public key "HashiCorp Security <[email protected]>" imported gpg: Total number processed: 1 gpg: imported: 1 # Download all files $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip # Verify signature $ gpg --verify vault_1.4.1_SHA256SUMS.sig vault_1.4.1_SHA256SUMS gpg: Signature made Thu Apr 30 08:20:29 2020 UTC gpg: using RSA key 51852D87348FFC4C gpg: Good signature from "HashiCorp Security <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE F189 5185 2D87 348F FC4C # Verify binary hash $ shasum -a 256 -c vault_1.4.1_SHA256SUMS # ... vault_1.4.1_linux_arm64.zip: OK # ... ```
https://github.com/ansible/ansible/issues/69364
https://github.com/ansible/ansible/pull/71205
a1a50bb3cd0c2d6f2f4cb260a43553c23e806d8a
eb8b3a8479ec82ad622f86ac46f3e9cc083952b8
2020-05-07T03:37:43Z
python
2020-08-17T16:21:15Z
lib/ansible/modules/get_url.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = r''' --- module: get_url short_description: Downloads files from HTTP, HTTPS, or FTP to node description: - Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server I(must) have direct access to the remote resource. - By default, if an environment variable C(<protocol>_proxy) is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see `setting the environment <https://docs.ansible.com/playbooks_environment.html>`_), or by using the use_proxy option. - HTTP redirects can redirect from HTTP to HTTPS so you should be sure that your proxy environment for both protocols is correct. - From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but will not download the entire file or verify it against hashes. - For Windows targets, use the M(ansible.windows.win_get_url) module instead. version_added: '0.6' options: url: description: - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path type: str required: true dest: description: - Absolute path of where to download the file to. - If C(dest) is a directory, either the server provided filename or, if none provided, the base name of the URL on the remote server will be used. If a directory, C(force) has no effect. - If C(dest) is a directory, the file will always be downloaded (regardless of the C(force) option), but replaced only if the contents changed. type: path required: true tmp_dest: description: - Absolute path of where temporary file is downloaded to. - When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting. - When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value. - U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir) type: path version_added: '2.1' force: description: - If C(yes) and C(dest) is not a directory, will download the file every time and replace the file if the contents change. If C(no), the file will only be downloaded if the destination does not exist. Generally should be C(yes) only for small local files. - Prior to 0.6, this module behaved as if C(yes) was the default. - Alias C(thirsty) has been deprecated and will be removed in 2.13. type: bool default: no aliases: [ thirsty ] version_added: '0.7' backup: description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. type: bool default: no version_added: '2.1' sha256sum: description: - If a SHA-256 checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. This option is deprecated and will be removed in version 2.14. Use option C(checksum) instead. default: '' version_added: "1.3" checksum: description: - 'If a checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. Format: <algorithm>:<checksum|url>, e.g.
checksum="sha256:D98291AC[...]B6DC7B97", checksum="sha256:http://example.com/path/sha256sum.txt"' - If you worry about portability, only the sha1 algorithm is available on all platforms and python versions. - The third party hashlib library can be installed for access to additional algorithms. - Additionally, if a checksum is passed to this parameter, and the file exist under the C(dest) location, the I(destination_checksum) would be calculated, and if checksum equals I(destination_checksum), the file download would be skipped (unless C(force) is true). If the checksum does not equal I(destination_checksum), the destination file is deleted. type: str default: '' version_added: "2.0" use_proxy: description: - if C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts. type: bool default: yes validate_certs: description: - If C(no), SSL certificates will not be validated. - This should only be used on personally controlled sites using self-signed certificates. type: bool default: yes timeout: description: - Timeout in seconds for URL request. type: int default: 10 version_added: '1.8' headers: description: - Add custom HTTP headers to a request in hash/dict format. - The hash/dict format was added in Ansible 2.6. - Previous versions used a C("key:value,key:value") string format. - The C("key:value,key:value") string format is deprecated and has been removed in version 2.10. type: dict version_added: '2.0' url_username: description: - The username for use in HTTP basic authentication. - This parameter can be used without C(url_password) for sites that allow empty passwords. - Since version 2.8 you can also use the C(username) alias for this option. type: str aliases: ['username'] version_added: '1.6' url_password: description: - The password for use in HTTP basic authentication. - If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used. - Since version 2.8 you can also use the 'password' alias for this option. type: str aliases: ['password'] version_added: '1.6' force_basic_auth: description: - Force the sending of the Basic authentication header upon initial request. - httplib2, the library used by the uri module only sends authentication information when a webservice responds to an initial request with a 401 status. Since some basic auth services do not properly send a 401, logins will fail. type: bool default: no version_added: '2.0' client_cert: description: - PEM formatted certificate chain file to be used for SSL client authentication. - This file can also include the key as well, and if the key is included, C(client_key) is not required. type: path version_added: '2.4' client_key: description: - PEM formatted file that contains your private key to be used for SSL client authentication. - If C(client_cert) contains both the certificate and key, this option is not required. type: path version_added: '2.4' http_agent: description: - Header to identify as, generally appears in web server logs. type: str default: ansible-httpget # informational: requirements for nodes extends_documentation_fragment: - files notes: - For Windows targets, use the M(ansible.windows.win_get_url) module instead. 
seealso: - module: ansible.builtin.uri - module: ansible.windows.win_get_url author: - Jan-Piet Mens (@jpmens) ''' EXAMPLES = r''' - name: Download foo.conf get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf mode: '0440' - name: Download file and force basic auth get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf force_basic_auth: yes - name: Download file with custom HTTP headers get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf headers: key1: one key2: two - name: Download file with check (sha256) get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c - name: Download file with check (md5) get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: md5:66dffb5228a211e61d6d7ef4a86f5758 - name: Download file with checksum url (sha256) get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf checksum: sha256:http://example.com/path/sha256sum.txt - name: Download file from a file path get_url: url: file:///tmp/afile.txt dest: /tmp/afilecopy.txt - name: Fetch file that requires authentication. username/password only available since 2.8, in older versions you need to use url_username/url_password get_url: url: http://example.com/path/file.conf dest: /etc/foo.conf username: bar password: '{{ mysecret }}' ''' RETURN = r''' backup_file: description: name of backup file created after download returned: changed and if backup=yes type: str sample: /path/to/file.txt.2015-02-12@22:09~ checksum_dest: description: sha1 checksum of the file after copy returned: success type: str sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827 checksum_src: description: sha1 checksum of the file returned: success type: str sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827 dest: description: destination file/path returned: success type: str sample: /path/to/file.txt elapsed: description: The number of seconds that elapsed while performing the download returned: always type: int sample: 23 gid: description: group id of the file returned: success type: int sample: 100 group: description: group of the file returned: success type: str sample: "httpd" md5sum: description: md5 checksum of the file after download returned: when supported type: str sample: "2a5aeecc61dc98c4d780b14b330e3282" mode: description: permissions of the target returned: success type: str sample: "0644" msg: description: the HTTP message from the request returned: always type: str sample: OK (unknown bytes) owner: description: owner of the file returned: success type: str sample: httpd secontext: description: the SELinux security context of the file returned: success type: str sample: unconfined_u:object_r:user_tmp_t:s0 size: description: size of the target returned: success type: int sample: 1220 src: description: source file used after download returned: always type: str sample: /tmp/tmpAdFLdV state: description: state of the target returned: success type: str sample: file status_code: description: the HTTP status code from the request returned: always type: int sample: 200 uid: description: owner id of the file, after execution returned: success type: int sample: 100 url: description: the actual URL used for the request returned: always type: str sample: https://www.ansible.com/ ''' import datetime import os import re import shutil import tempfile import traceback from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native from ansible.module_utils.urls import fetch_url, url_argument_spec # ============================================================== # url handling def url_filename(url): fn = os.path.basename(urlsplit(url)[2]) if fn == '': return 'index.html' return fn def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''): """ Download data from the url and store in a temporary file. Return (tempfile, info about the request) """ if module.check_mode: method = 'HEAD' else: method = 'GET' start = datetime.datetime.utcnow() rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method) elapsed = (datetime.datetime.utcnow() - start).seconds if info['status'] == 304: module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed) # Exceptions in fetch_url may result in a status -1, this ensures a proper error to the user in all cases if info['status'] == -1: module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed) if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')): module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed) # create a temporary file and copy content to do checksum-based replacement if tmp_dest: # tmp_dest should be an existing dir tmp_dest_is_dir = os.path.isdir(tmp_dest) if not tmp_dest_is_dir: if os.path.exists(tmp_dest): module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed) else: module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed) else: tmp_dest = module.tmpdir fd, tempname = tempfile.mkstemp(dir=tmp_dest) f = os.fdopen(fd, 'wb') try: shutil.copyfileobj(rsp, f) except Exception as e: os.remove(tempname) module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc()) f.close() rsp.close() return tempname, info def extract_filename_from_headers(headers): """ Extracts a filename from the given dict of HTTP headers. Looks for the content-disposition header and applies a regex. Returns the filename if successful, else None.""" cont_disp_regex = 'attachment; ?filename="?([^"]+)' res = None if 'content-disposition' in headers: cont_disp = headers['content-disposition'] match = re.match(cont_disp_regex, cont_disp) if match: res = match.group(1) # Try preventing any funny business.
res = os.path.basename(res) return res # ============================================================== # main def main(): argument_spec = url_argument_spec() # setup aliases argument_spec['url_username']['aliases'] = ['username'] argument_spec['url_password']['aliases'] = ['password'] argument_spec.update( url=dict(type='str', required=True), dest=dict(type='path', required=True), backup=dict(type='bool'), sha256sum=dict(type='str', default=''), checksum=dict(type='str', default=''), timeout=dict(type='int', default=10), headers=dict(type='dict'), tmp_dest=dict(type='path'), ) module = AnsibleModule( # not checking because of daisy chain to file module argument_spec=argument_spec, add_file_common_args=True, supports_check_mode=True, mutually_exclusive=[['checksum', 'sha256sum']], ) if module.params.get('thirsty'): module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead', version='2.13', collection_name='ansible.builtin') if module.params.get('sha256sum'): module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead', version='2.14', collection_name='ansible.builtin') url = module.params['url'] dest = module.params['dest'] backup = module.params['backup'] force = module.params['force'] sha256sum = module.params['sha256sum'] checksum = module.params['checksum'] use_proxy = module.params['use_proxy'] timeout = module.params['timeout'] headers = module.params['headers'] tmp_dest = module.params['tmp_dest'] result = dict( changed=False, checksum_dest=None, checksum_src=None, dest=dest, elapsed=0, url=url, ) dest_is_dir = os.path.isdir(dest) last_mod_time = None # workaround for usage of deprecated sha256sum parameter if sha256sum: checksum = 'sha256:%s' % (sha256sum) # checksum specified, parse for algorithm and checksum if checksum: try: algorithm, checksum = checksum.split(':', 1) except ValueError: module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result) if checksum.startswith('http://') or checksum.startswith('https://') or checksum.startswith('ftp://'): checksum_url = checksum # download checksum file to checksum_tmpsrc checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest) with open(checksum_tmpsrc) as f: lines = [line.rstrip('\n') for line in f] os.remove(checksum_tmpsrc) checksum_map = {} for line in lines: parts = line.split(None, 1) if len(parts) == 2: checksum_map[parts[0]] = parts[1] filename = url_filename(url) # Look through each line in the checksum file for a hash corresponding to # the filename in the url, returning the first hash that is found. for cksum in (s for (s, f) in checksum_map.items() if f.strip('./') == filename): checksum = cksum break else: checksum = None if checksum is None: module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url)) # Remove any non-alphanumeric characters, including the infamous # Unicode zero-width space checksum = re.sub(r'\W+', '', checksum).lower() # Ensure the checksum portion is a hexdigest try: int(checksum, 16) except ValueError: module.fail_json(msg='The checksum format is invalid', **result) if not dest_is_dir and os.path.exists(dest): checksum_mismatch = False # If the download is not forced and there is a checksum, allow # checksum match to skip the download. 
if not force and checksum != '': destination_checksum = module.digest_from_file(dest, algorithm) if checksum != destination_checksum: checksum_mismatch = True # Not forcing redownload, unless checksum does not match if not force and checksum and not checksum_mismatch: # allow file attribute changes file_args = module.load_file_common_arguments(module.params, path=dest) result['changed'] = module.set_fs_attributes_if_different(file_args, False) if result['changed']: module.exit_json(msg="file already exists but file attributes changed", **result) module.exit_json(msg="file already exists", **result) # If the file already exists, prepare the last modified time for the # request. mtime = os.path.getmtime(dest) last_mod_time = datetime.datetime.utcfromtimestamp(mtime) # If the checksum does not match we have to force the download # because last_mod_time may be newer than on remote if checksum_mismatch: force = True # download to tmpsrc start = datetime.datetime.utcnow() tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest) result['elapsed'] = (datetime.datetime.utcnow() - start).seconds result['src'] = tmpsrc # Now the request has completed, we can finally generate the final # destination file name from the info dict. if dest_is_dir: filename = extract_filename_from_headers(info) if not filename: # Fall back to extracting the filename from the URL. # Pluck the URL from the info, since a redirect could have changed # it. filename = url_filename(info['url']) dest = os.path.join(dest, filename) result['dest'] = dest # raise an error if there is no tmpsrc file if not os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result) if not os.access(tmpsrc, os.R_OK): os.remove(tmpsrc) module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result) result['checksum_src'] = module.sha1(tmpsrc) # check if there is no dest file if os.path.exists(dest): # raise an error if copy has no permission on dest if not os.access(dest, os.W_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not writable" % (dest), **result) if not os.access(dest, os.R_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not readable" % (dest), **result) result['checksum_dest'] = module.sha1(dest) else: if not os.path.exists(os.path.dirname(dest)): os.remove(tmpsrc) module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result) if not os.access(os.path.dirname(dest), os.W_OK): os.remove(tmpsrc) module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result) if module.check_mode: if os.path.exists(tmpsrc): os.remove(tmpsrc) result['changed'] = ('checksum_dest' not in result or result['checksum_src'] != result['checksum_dest']) module.exit_json(msg=info.get('msg', ''), **result) backup_file = None if result['checksum_src'] != result['checksum_dest']: try: if backup: if os.path.exists(dest): backup_file = module.backup_local(dest) module.atomic_move(tmpsrc, dest) except Exception as e: if os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), exception=traceback.format_exc(), **result) result['changed'] = True else: result['changed'] = False if os.path.exists(tmpsrc): os.remove(tmpsrc) if checksum != '': destination_checksum = module.digest_from_file(dest, algorithm) if checksum != destination_checksum:
os.remove(dest) module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result) # allow file attribute changes file_args = module.load_file_common_arguments(module.params, path=dest) result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed']) # Backwards compat only. We'll return None on FIPS enabled systems try: result['md5sum'] = module.md5(dest) except ValueError: result['md5sum'] = None if backup_file: result['backup_file'] = backup_file # Mission complete module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result) if __name__ == '__main__': main()
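For clarity on the checksum-URL branch in `main()` above: the downloaded checksum file is parsed into a hash-to-filename map, and the URL's basename is matched against it, with `strip('./')` tolerating `./`-prefixed entries. A condensed sketch of that matching logic (sample lines adapted from the integration test further down):

```python
lines = [
    'a97e6837f60cec6da4491bab387296bbcd72bdba ./27617.txt',
    '3911340502960ca33aece01129234460bfeb2791 ./not_target1.txt',
]

# Build {checksum: filename}, exactly as the module does.
checksum_map = {}
for line in lines:
    parts = line.split(None, 1)
    if len(parts) == 2:
        checksum_map[parts[0]] = parts[1]

filename = '27617.txt'  # what url_filename() yields for .../27617.txt

# str.strip('./') removes leading/trailing '.' and '/' characters, so both
# '27617.txt' and './27617.txt' entries match a plain basename.
checksum = next((s for s, f in checksum_map.items() if f.strip('./') == filename), None)
print(checksum)  # a97e6837f60cec6da4491bab387296bbcd72bdba
```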
closed
ansible/ansible
https://github.com/ansible/ansible
69,364
Allow digital signature verification in get_url
### SUMMARY The `get_url` module does file integrity checking and allows passing a SHASUM text file on a remote host over HTTP. Typically, binaries and their hashes are stored together, so while the integrity control works, there is no verification that the binary and its corresponding SHASUM file have not been altered by a malicious actor. Sometimes software providers will include a digital signature for cryptographically verifying the SHASUM file. This is a feature request to support verifying digital signatures. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME get_url ##### ADDITIONAL INFORMATION Consider the software package Vault by HashiCorp. It's a single binary in a zip archive precompiled for practically every common architecture and operating system. It's used to protect secrets, so there is value in a threat actor modifying the code to install backdoors or other code to compromise or disclose those secrets. HashiCorp is aware of this, so they have a GPG key pair: the private key they keep secret and the public key they publish on their website and in all major GPG public key servers. Whenever they create a new release, they SHA256 all of the binaries, put them in a SHA256SUMS file, and sign it with their private key. I would imagine providing an attribute for `get_url` to specify the public key ID as well as the path to the digital signature. For example: ```yaml - name: Download vault get_url: url: https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip dest: vault_1.4.1_linux_arm64.zip checksum: sha256:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS checksum_sig: 51852D87348FFC4C:https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig ``` Right now I have to download the SHA256SUMS file to verify the signature, and then I can't pass that file to the get_url checksum parameter because it only accepts the raw checksum or a URL; it would also be nice to pass a local path to this field and the hypothesized checksum_sig field. These are the steps I am using to ensure the integrity and authenticity of Vault: ```shell # Import HashiCorp's Public Key from the OpenPGP Public Key Server $ gpg --keyserver keys.openpgp.org --recv-keys 51852D87348FFC4C gpg: directory '/root/.gnupg' created gpg: keybox '/root/.gnupg/pubring.kbx' created gpg: /root/.gnupg/trustdb.gpg: trustdb created gpg: key 51852D87348FFC4C: public key "HashiCorp Security <[email protected]>" imported gpg: Total number processed: 1 gpg: imported: 1 # Download all files $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_SHA256SUMS.sig $ curl -Os https://releases.hashicorp.com/vault/1.4.1/vault_1.4.1_linux_arm64.zip # Verify signature $ gpg --verify vault_1.4.1_SHA256SUMS.sig vault_1.4.1_SHA256SUMS gpg: Signature made Thu Apr 30 08:20:29 2020 UTC gpg: using RSA key 51852D87348FFC4C gpg: Good signature from "HashiCorp Security <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE F189 5185 2D87 348F FC4C # Verify binary hash $ shasum -a 256 -c vault_1.4.1_SHA256SUMS # ... vault_1.4.1_linux_arm64.zip: OK # ... ```
https://github.com/ansible/ansible/issues/69364
https://github.com/ansible/ansible/pull/71205
a1a50bb3cd0c2d6f2f4cb260a43553c23e806d8a
eb8b3a8479ec82ad622f86ac46f3e9cc083952b8
2020-05-07T03:37:43Z
python
2020-08-17T16:21:15Z
test/integration/targets/get_url/tasks/main.yml
# Test code for the get_url module # (c) 2014, Richard Isaacson <[email protected]> # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <https://www.gnu.org/licenses/>. - name: Determine if python looks like it will support modern ssl features like SNI command: "{{ ansible_python.executable }} -c 'from ssl import SSLContext'" ignore_errors: True register: python_test - name: Set python_has_sslcontext if we have it set_fact: python_has_ssl_context: True when: python_test.rc == 0 - name: Set python_has_sslcontext False if we don't have it set_fact: python_has_ssl_context: False when: python_test.rc != 0 - name: Define test files for file schema set_fact: geturl_srcfile: "{{ remote_tmp_dir }}/aurlfile.txt" geturl_dstfile: "{{ remote_tmp_dir }}/aurlfile_copy.txt" - name: Create source file copy: dest: "{{ geturl_srcfile }}" content: "foobar" register: source_file_copied - name: test file fetch get_url: url: "file://{{ source_file_copied.dest }}" dest: "{{ geturl_dstfile }}" register: result - name: assert success and change assert: that: - result is changed - '"OK" in result.msg' - name: test nonexisting file fetch get_url: url: "file://{{ source_file_copied.dest }}NOFILE" dest: "{{ geturl_dstfile }}NOFILE" register: result ignore_errors: True - name: assert success and change assert: that: - result is failed - name: test HTTP HEAD request for file in check mode get_url: url: "https://{{ httpbin_host }}/get" dest: "{{ remote_tmp_dir }}/get_url_check.txt" force: yes check_mode: True register: result - name: assert that the HEAD request was successful in check mode assert: that: - result is changed - '"OK" in result.msg' - name: test HTTP HEAD for nonexistent URL in check mode get_url: url: "https://{{ httpbin_host }}/DOESNOTEXIST" dest: "{{ remote_tmp_dir }}/shouldnotexist.html" force: yes check_mode: True register: result ignore_errors: True - name: assert that HEAD request for nonexistent URL failed assert: that: - result is failed - name: test https fetch get_url: url="https://{{ httpbin_host }}/get" dest={{remote_tmp_dir}}/get_url.txt force=yes register: result - name: assert the get_url call was successful assert: that: - result is changed - '"OK" in result.msg' - name: test https fetch to a site with mismatched hostname and certificate get_url: url: "https://{{ badssl_host }}/" dest: "{{ remote_tmp_dir }}/shouldnotexist.html" ignore_errors: True register: result - stat: path: "{{ remote_tmp_dir }}/shouldnotexist.html" register: stat_result - name: Assert that the file was not downloaded assert: that: - "result is failed" - "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or ( result.msg is match('hostname .* doesn.t match .*'))" - "stat_result.stat.exists == false" - name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no get_url: url: "https://{{ badssl_host }}/" dest: "{{ remote_tmp_dir }}/get_url_no_validate.html" validate_certs: no register: 
result - stat: path: "{{ remote_tmp_dir }}/get_url_no_validate.html" register: stat_result - name: Assert that the file was downloaded assert: that: - result is changed - "stat_result.stat.exists == true" # SNI Tests # SNI is only built into the stdlib from python-2.7.9 onwards - name: Test that SNI works get_url: url: 'https://{{ sni_host }}/' dest: "{{ remote_tmp_dir }}/sni.html" register: get_url_result ignore_errors: True - command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html" register: data_result when: python_has_ssl_context - debug: var: get_url_result - name: Assert that SNI works with this python version assert: that: - 'data_result.rc == 0' when: python_has_ssl_context # If the client doesn't support SNI then get_url should have failed with a certificate mismatch - name: Assert that hostname verification failed because SNI is not supported on this version of python assert: that: - 'get_url_result is failed' when: not python_has_ssl_context # These tests are just side effects of how the site is hosted. It's not # specifically a test site. So the tests may break due to the hosting changing - name: Test that SNI works get_url: url: 'https://{{ sni_host }}/' dest: "{{ remote_tmp_dir }}/sni.html" register: get_url_result ignore_errors: True - command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html" register: data_result when: python_has_ssl_context - debug: var: get_url_result - name: Assert that SNI works with this python version assert: that: - 'data_result.rc == 0' - 'get_url_result is not failed' when: python_has_ssl_context # If the client doesn't support SNI then get_url should have failed with a certificate mismatch - name: Assert that hostname verification failed because SNI is not supported on this version of python assert: that: - 'get_url_result is failed' when: not python_has_ssl_context # End hacky SNI test section - name: Test get_url with redirect get_url: url: 'https://{{ httpbin_host }}/redirect/6' dest: "{{ remote_tmp_dir }}/redirect.json" - name: Test that setting file modes work get_url: url: 'https://{{ httpbin_host }}/' dest: '{{ remote_tmp_dir }}/test' mode: '0707' register: result - stat: path: "{{ remote_tmp_dir }}/test" register: stat_result - name: Assert that the file has the right permissions assert: that: - result is changed - "stat_result.stat.mode == '0707'" - name: Test that setting file modes on an already downloaded file work get_url: url: 'https://{{ httpbin_host }}/' dest: '{{ remote_tmp_dir }}/test' mode: '0070' register: result - stat: path: "{{ remote_tmp_dir }}/test" register: stat_result - name: Assert that the file has the right permissions assert: that: - result is changed - "stat_result.stat.mode == '0070'" # https://github.com/ansible/ansible/pull/65307/ - name: Test that on http status 304, we get a status_code field. get_url: url: 'https://{{ httpbin_host }}/status/304' dest: '{{ remote_tmp_dir }}/test' register: result - name: Assert that we get the appropriate status_code assert: that: - "'status_code' in result" - "result.status_code == 304" # https://github.com/ansible/ansible/issues/29614 - name: Change mode on an already downloaded file and specify checksum get_url: url: 'https://{{ httpbin_host }}/get' dest: '{{ remote_tmp_dir }}/test' checksum: 'sha256:7036ede810fad2b5d2e7547ec703cae8da61edbba43c23f9d7203a0239b765c4.' 
mode: '0775' register: result - stat: path: "{{ remote_tmp_dir }}/test" register: stat_result - name: Assert that file permissions on already downloaded file were changed assert: that: - result is changed - "stat_result.stat.mode == '0775'" - name: test checksum match in check mode get_url: url: 'https://{{ httpbin_host }}/get' dest: '{{ remote_tmp_dir }}/test' checksum: 'sha256:7036ede810fad2b5d2e7547ec703cae8da61edbba43c23f9d7203a0239b765c4.' check_mode: True register: result - name: Assert that check mode was green assert: that: - result is not changed - name: Get a file that already exists with a checksum get_url: url: 'https://{{ httpbin_host }}/cache' dest: '{{ remote_tmp_dir }}/test' checksum: 'sha1:{{ stat_result.stat.checksum }}' register: result - name: Assert that the file was not downloaded assert: that: - result.msg == 'file already exists' - name: Get a file that already exists get_url: url: 'https://{{ httpbin_host }}/cache' dest: '{{ remote_tmp_dir }}/test' register: result - name: Assert that we didn't re-download unnecessarily assert: that: - result is not changed - "'304' in result.msg" - name: get a file that doesn't respond to If-Modified-Since without checksum get_url: url: 'https://{{ httpbin_host }}/get' dest: '{{ remote_tmp_dir }}/test' register: result - name: Assert that we downloaded the file assert: that: - result is changed # https://github.com/ansible/ansible/issues/27617 - name: set role facts set_fact: http_port: 27617 files_dir: '{{ remote_tmp_dir }}/files' - name: create files_dir file: dest: "{{ files_dir }}" state: directory - name: create src file copy: dest: '{{ files_dir }}/27617.txt' content: "ptux" - name: create sha1 checksum file of src copy: dest: '{{ files_dir }}/sha1sum.txt' content: | a97e6837f60cec6da4491bab387296bbcd72bdba 27617.txt 3911340502960ca33aece01129234460bfeb2791 not_target1.txt 1b4b6adf30992cedb0f6edefd6478ff0a593b2e4 not_target2.txt - name: create sha256 checksum file of src copy: dest: '{{ files_dir }}/sha256sum.txt' content: | b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 27617.txt 30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 not_target1.txt d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b not_target2.txt - name: create sha256 checksum file of src with a dot leading path copy: dest: '{{ files_dir }}/sha256sum_with_dot.txt' content: | b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 
./27617.txt 30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 ./not_target1.txt d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b ./not_target2.txt - copy: src: "testserver.py" dest: "{{ remote_tmp_dir }}/testserver.py" - name: start SimpleHTTPServer for issues 27617 shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }} async: 90 poll: 0 - name: Wait for SimpleHTTPServer to come up online wait_for: host: 'localhost' port: '{{ http_port }}' state: started - name: download src with sha1 checksum url get_url: url: 'http://localhost:{{ http_port }}/27617.txt' dest: '{{ remote_tmp_dir }}' checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt' register: result_sha1 - stat: path: "{{ remote_tmp_dir }}/27617.txt" register: stat_result_sha1 - name: download src with sha256 checksum url get_url: url: 'http://localhost:{{ http_port }}/27617.txt' dest: '{{ remote_tmp_dir }}/27617sha256.txt' checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt' register: result_sha256 - stat: path: "{{ remote_tmp_dir }}/27617.txt" register: stat_result_sha256 - name: download src with sha256 checksum url with dot leading paths get_url: url: 'http://localhost:{{ http_port }}/27617.txt' dest: '{{ remote_tmp_dir }}/27617sha256_with_dot.txt' checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt' register: result_sha256_with_dot - stat: path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt" register: stat_result_sha256_with_dot - name: Assert that the file was downloaded assert: that: - result_sha1 is changed - result_sha256 is changed - result_sha256_with_dot is changed - "stat_result_sha1.stat.exists == true" - "stat_result_sha256.stat.exists == true" - "stat_result_sha256_with_dot.stat.exists == true" #https://github.com/ansible/ansible/issues/16191 - name: Test url split with no filename get_url: url: https://{{ httpbin_host }} dest: "{{ remote_tmp_dir }}" - name: Test headers dict get_url: url: https://{{ httpbin_host }}/headers headers: Foo: bar Baz: qux dest: "{{ remote_tmp_dir }}/headers_dict.json" - name: Get downloaded file slurp: src: "{{ remote_tmp_dir }}/headers_dict.json" register: result - name: Test headers dict assert: that: - (result.content | b64decode | from_json).headers.get('Foo') == 'bar' - (result.content | b64decode | from_json).headers.get('Baz') == 'qux' - name: Test client cert auth, with certs get_url: url: "https://ansible.http.tests/ssl_client_verify" client_cert: "{{ remote_tmp_dir }}/client.pem" client_key: "{{ remote_tmp_dir }}/client.key" dest: "{{ remote_tmp_dir }}/ssl_client_verify" when: has_httptester - name: Get downloaded file slurp: src: "{{ remote_tmp_dir }}/ssl_client_verify" register: result when: has_httptester - name: Assert that the ssl_client_verify file contains the correct content assert: that: - '(result.content | b64decode) == "ansible.http.tests:SUCCESS"' when: has_httptester
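The checksum-from-URL tests above (issue 27617) exercise the pattern most users reach for; as a minimal sketch of what those tests verify, `checksum` can point at a published checksum file instead of a literal digest, and the digest is matched by filename. The host and paths here are placeholders, not part of the test suite:

```yaml
- name: Download a file and verify it against a published sha256sum file
  get_url:
    url: "https://example.com/files/app.tar.gz"                    # placeholder URL
    dest: "/tmp/app.tar.gz"
    checksum: "sha256:https://example.com/files/sha256sum.txt"     # digest looked up by filename
```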
closed
ansible/ansible
https://github.com/ansible/ansible
71,307
toml inventory cannot dump unsafe values
##### SUMMARY

Rendering the string

```yaml
config:
  service: !unsafe '{{ SYSLOG_IDENTIFIER }}'
```

with

```yaml
- name: place config
  copy:
    content: |
      {{ config | to_toml }}
    dest: "{{ config_file }}"
```

leads to

```
service = [ "{", "{", " ", "S", "Y", "S", "L", "O", "G", "_", "I", "D", "E", "N", "T", "I", "F", "I", "E", "R", " ", "}", "}",]
```

instead of

```
service = "{{ SYSLOG_IDENTIFIER }}"
```

I understand that the filter to_toml is not part of Ansible and comes from https://github.com/sivel/toiletwater, but its import section uses Ansible code: https://github.com/sivel/toiletwater/blob/master/plugins/filter/toml.py#L10

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
to_toml

##### ANSIBLE VERSION

```paste below
% ansible --version
ansible 2.9.11
  config file = None
  configured module search path = ['/home/ashirokih/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ashirokih/python/lib/python3.8/site-packages/ansible
  executable location = /home/ashirokih/python/bin/ansible
  python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
```

##### CONFIGURATION

```paste below
empty
```

##### OS / ENVIRONMENT
not relevant

##### STEPS TO REPRODUCE

Rendering the string

```yaml
config:
  service: !unsafe '{{ SYSLOG_IDENTIFIER }}'
```

with

```yaml
- name: place config
  copy:
    content: |
      {{ config | to_toml }}
    dest: "{{ config_file }}"
```

##### EXPECTED RESULTS

```
service = "{{ SYSLOG_IDENTIFIER }}"
```

##### ACTUAL RESULTS

```paste below
service = [ "{", "{", " ", "S", "Y", "S", "L", "O", "G", "_", "I", "D", "E", "N", "T", "I", "F", "I", "E", "R", " ", "}", "}",]
```

Probably I should ask @sivel.
https://github.com/ansible/ansible/issues/71307
https://github.com/ansible/ansible/pull/71309
959af7d90b34dfe530e27279db214ae2976c1f86
9da880182be89a7fdbea3b5424da501542eba4c9
2020-08-17T14:19:22Z
python
2020-08-17T18:46:13Z
changelogs/fragments/71307-toml-dumps-unsafe.yml
closed
ansible/ansible
https://github.com/ansible/ansible
71,307
toml inventory cannot dump unsafe values
##### SUMMARY

Rendering the string

```yaml
config:
  service: !unsafe '{{ SYSLOG_IDENTIFIER }}'
```

with

```yaml
- name: place config
  copy:
    content: |
      {{ config | to_toml }}
    dest: "{{ config_file }}"
```

leads to

```
service = [ "{", "{", " ", "S", "Y", "S", "L", "O", "G", "_", "I", "D", "E", "N", "T", "I", "F", "I", "E", "R", " ", "}", "}",]
```

instead of

```
service = "{{ SYSLOG_IDENTIFIER }}"
```

I understand that the filter to_toml is not part of Ansible and comes from https://github.com/sivel/toiletwater, but its import section uses Ansible code: https://github.com/sivel/toiletwater/blob/master/plugins/filter/toml.py#L10

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
to_toml

##### ANSIBLE VERSION

```paste below
% ansible --version
ansible 2.9.11
  config file = None
  configured module search path = ['/home/ashirokih/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ashirokih/python/lib/python3.8/site-packages/ansible
  executable location = /home/ashirokih/python/bin/ansible
  python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
```

##### CONFIGURATION

```paste below
empty
```

##### OS / ENVIRONMENT
not relevant

##### STEPS TO REPRODUCE

Rendering the string

```yaml
config:
  service: !unsafe '{{ SYSLOG_IDENTIFIER }}'
```

with

```yaml
- name: place config
  copy:
    content: |
      {{ config | to_toml }}
    dest: "{{ config_file }}"
```

##### EXPECTED RESULTS

```
service = "{{ SYSLOG_IDENTIFIER }}"
```

##### ACTUAL RESULTS

```paste below
service = [ "{", "{", " ", "S", "Y", "S", "L", "O", "G", "_", "I", "D", "E", "N", "T", "I", "F", "I", "E", "R", " ", "}", "}",]
```

Probably I should ask @sivel.
https://github.com/ansible/ansible/issues/71307
https://github.com/ansible/ansible/pull/71309
959af7d90b34dfe530e27279db214ae2976c1f86
9da880182be89a7fdbea3b5424da501542eba4c9
2020-08-17T14:19:22Z
python
2020-08-17T18:46:13Z
lib/ansible/plugins/inventory/toml.py
# Copyright (c) 2018 Matt Martz <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = r''' inventory: toml version_added: "2.8" short_description: Uses a specific TOML file as an inventory source. description: - TOML based inventory format - File MUST have a valid '.toml' file extension notes: - Requires the 'toml' python library ''' EXAMPLES = r'''# fmt: toml # Example 1 [all.vars] has_java = false [web] children = [ "apache", "nginx" ] vars = { http_port = 8080, myvar = 23 } [web.hosts] host1 = {} host2 = { ansible_port = 222 } [apache.hosts] tomcat1 = {} tomcat2 = { myvar = 34 } tomcat3 = { mysecret = "03#pa33w0rd" } [nginx.hosts] jenkins1 = {} [nginx.vars] has_java = true # Example 2 [all.vars] has_java = false [web] children = [ "apache", "nginx" ] [web.vars] http_port = 8080 myvar = 23 [web.hosts.host1] [web.hosts.host2] ansible_port = 222 [apache.hosts.tomcat1] [apache.hosts.tomcat2] myvar = 34 [apache.hosts.tomcat3] mysecret = "03#pa33w0rd" [nginx.hosts.jenkins1] [nginx.vars] has_java = true # Example 3 [ungrouped.hosts] host1 = {} host2 = { ansible_host = "127.0.0.1", ansible_port = 44 } host3 = { ansible_host = "127.0.0.1", ansible_port = 45 } [g1.hosts] host4 = {} [g2.hosts] host4 = {} ''' import os from functools import partial from ansible.errors import AnsibleFileNotFound, AnsibleParserError from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.common._collections_compat import MutableMapping, MutableSequence from ansible.module_utils.six import string_types, text_type from ansible.parsing.yaml.objects import AnsibleSequence, AnsibleUnicode from ansible.plugins.inventory import BaseFileInventoryPlugin from ansible.utils.display import Display try: import toml HAS_TOML = True except ImportError: HAS_TOML = False display = Display() if HAS_TOML and hasattr(toml, 'TomlEncoder'): class AnsibleTomlEncoder(toml.TomlEncoder): def __init__(self, *args, **kwargs): super(AnsibleTomlEncoder, self).__init__(*args, **kwargs) # Map our custom YAML object types to dump_funcs from ``toml`` self.dump_funcs.update({ AnsibleSequence: self.dump_funcs.get(list), AnsibleUnicode: self.dump_funcs.get(str), }) toml_dumps = partial(toml.dumps, encoder=AnsibleTomlEncoder()) else: def toml_dumps(data): return toml.dumps(convert_yaml_objects_to_native(data)) def convert_yaml_objects_to_native(obj): """Older versions of the ``toml`` python library, don't have a pluggable way to tell the encoder about custom types, so we need to ensure objects that we pass are native types. Only used on ``toml<0.10.0`` where ``toml.TomlEncoder`` is missing. This function recurses an object and ensures we cast any of the types from ``ansible.parsing.yaml.objects`` into their native types, effectively cleansing the data before we hand it over to ``toml`` This function doesn't directly check for the types from ``ansible.parsing.yaml.objects`` but instead checks for the types those objects inherit from, to offer more flexibility. 
""" if isinstance(obj, dict): return dict((k, convert_yaml_objects_to_native(v)) for k, v in obj.items()) elif isinstance(obj, list): return [convert_yaml_objects_to_native(v) for v in obj] elif isinstance(obj, text_type): return text_type(obj) else: return obj class InventoryModule(BaseFileInventoryPlugin): NAME = 'toml' def _parse_group(self, group, group_data): if not isinstance(group_data, (MutableMapping, type(None))): self.display.warning("Skipping '%s' as this is not a valid group definition" % group) return group = self.inventory.add_group(group) if group_data is None: return for key, data in group_data.items(): if key == 'vars': if not isinstance(data, MutableMapping): raise AnsibleParserError( 'Invalid "vars" entry for "%s" group, requires a dict, found "%s" instead.' % (group, type(data)) ) for var, value in data.items(): self.inventory.set_variable(group, var, value) elif key == 'children': if not isinstance(data, MutableSequence): raise AnsibleParserError( 'Invalid "children" entry for "%s" group, requires a list, found "%s" instead.' % (group, type(data)) ) for subgroup in data: self._parse_group(subgroup, {}) self.inventory.add_child(group, subgroup) elif key == 'hosts': if not isinstance(data, MutableMapping): raise AnsibleParserError( 'Invalid "hosts" entry for "%s" group, requires a dict, found "%s" instead.' % (group, type(data)) ) for host_pattern, value in data.items(): hosts, port = self._expand_hostpattern(host_pattern) self._populate_host_vars(hosts, value, group, port) else: self.display.warning( 'Skipping unexpected key "%s" in group "%s", only "vars", "children" and "hosts" are valid' % (key, group) ) def _load_file(self, file_name): if not file_name or not isinstance(file_name, string_types): raise AnsibleParserError("Invalid filename: '%s'" % to_native(file_name)) b_file_name = to_bytes(self.loader.path_dwim(file_name)) if not self.loader.path_exists(b_file_name): raise AnsibleFileNotFound("Unable to retrieve file contents", file_name=file_name) try: (b_data, private) = self.loader._get_file_contents(file_name) return toml.loads(to_text(b_data, errors='surrogate_or_strict')) except toml.TomlDecodeError as e: raise AnsibleParserError( 'TOML file (%s) is invalid: %s' % (file_name, to_native(e)), orig_exc=e ) except (IOError, OSError) as e: raise AnsibleParserError( "An error occurred while trying to read the file '%s': %s" % (file_name, to_native(e)), orig_exc=e ) except Exception as e: raise AnsibleParserError( "An unexpected error occurred while parsing the file '%s': %s" % (file_name, to_native(e)), orig_exc=e ) def parse(self, inventory, loader, path, cache=True): ''' parses the inventory file ''' if not HAS_TOML: raise AnsibleParserError( 'The TOML inventory plugin requires the python "toml" library' ) super(InventoryModule, self).parse(inventory, loader, path) self.set_options() try: data = self._load_file(path) except Exception as e: raise AnsibleParserError(e) if not data: raise AnsibleParserError('Parsed empty TOML file') elif data.get('plugin'): raise AnsibleParserError('Plugin configuration TOML file, not TOML inventory') for group_name in data: self._parse_group(group_name, data[group_name]) def verify_file(self, path): if super(InventoryModule, self).verify_file(path): file_name, ext = os.path.splitext(path) if ext == '.toml': return True return False
closed
ansible/ansible
https://github.com/ansible/ansible
69,039
Ansible-vault in INI style inventory file?
##### SUMMARY

Hey, this page shows an INI-style hosts file using ansible-vault to encrypt passwords: https://docs.ansible.com/ansible/latest/network/user_guide/network_best_practices_2.5.html

But this page says that it is not possible to do it like that: https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html

"This is an example using an extract from a YAML inventory, as the INI format does not support inline vaults"

Which one is correct?

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME

##### ANSIBLE VERSION

```paste below
```

##### CONFIGURATION

```paste below
```

##### OS / ENVIRONMENT

##### ADDITIONAL INFORMATION
https://github.com/ansible/ansible/issues/69039
https://github.com/ansible/ansible/pull/71246
7f97a62d8775b4ea5914550256b3f4c63e7a2b32
a1257d75aa2f874ea2768dd99c4affe8b37a886f
2020-04-20T10:48:07Z
python
2020-08-18T20:34:25Z
docs/docsite/rst/network/user_guide/network_best_practices_2.5.rst
.. _network-best-practices: ************************ Ansible Network Examples ************************ This document describes some examples of using Ansible to manage your network infrastructure. .. contents:: :local: Prerequisites ============= This example requires the following: * **Ansible 2.5** (or higher) installed. See :ref:`intro_installation_guide` for more information. * One or more network devices that are compatible with Ansible. * Basic understanding of YAML :ref:`yaml_syntax`. * Basic understanding of Jinja2 templates. See :ref:`playbooks_templating` for more information. * Basic Linux command line use. * Basic knowledge of network switch & router configurations. Groups and variables in an inventory file ========================================= An ``inventory`` file is a YAML or INI-like configuration file that defines the mapping of hosts into groups. In our example, the inventory file defines the groups ``eos``, ``ios``, ``vyos`` and a "group of groups" called ``switches``. Further details about subgroups and inventory files can be found in the :ref:`Ansible inventory Group documentation <subgroups>`. Because Ansible is a flexible tool, there are a number of ways to specify connection information and credentials. We recommend using the ``[my_group:vars]`` capability in your inventory file. Here's what it would look like if you specified your SSH passwords (encrypted with Ansible Vault) among your variables: .. code-block:: ini [all:vars] # these defaults can be overridden for any group in the [group:vars] section ansible_connection=network_cli ansible_user=ansible [switches:children] eos ios vyos [eos] veos01 ansible_host=veos-01.example.net veos02 ansible_host=veos-02.example.net veos03 ansible_host=veos-03.example.net veos04 ansible_host=veos-04.example.net [eos:vars] ansible_become=yes ansible_become_method=enable ansible_network_os=eos ansible_user=my_eos_user ansible_password= !vault | $ANSIBLE_VAULT;1.1;AES256 37373735393636643261383066383235363664386633386432343236663533343730353361653735 6131363539383931353931653533356337353539373165320a316465383138636532343463633236 37623064393838353962386262643230303438323065356133373930646331623731656163623333 3431353332343530650a373038366364316135383063356531633066343434623631303166626532 9562 [ios] ios01 ansible_host=ios-01.example.net ios02 ansible_host=ios-02.example.net ios03 ansible_host=ios-03.example.net [ios:vars] ansible_become=yes ansible_become_method=enable ansible_network_os=ios ansible_user=my_ios_user ansible_password= !vault | $ANSIBLE_VAULT;1.1;AES256 34623431313336343132373235313066376238386138316466636437653938623965383732373130 3466363834613161386538393463663861636437653866620a373136356366623765373530633735 34323262363835346637346261653137626539343534643962376139366330626135393365353739 3431373064656165320a333834613461613338626161633733343566666630366133623265303563 8472 [vyos] vyos01 ansible_host=vyos-01.example.net vyos02 ansible_host=vyos-02.example.net vyos03 ansible_host=vyos-03.example.net [vyos:vars] ansible_network_os=vyos ansible_user=my_vyos_user ansible_password= !vault | $ANSIBLE_VAULT;1.1;AES256 39336231636137663964343966653162353431333566633762393034646462353062633264303765 6331643066663534383564343537343334633031656538370a333737656236393835383863306466 62633364653238323333633337313163616566383836643030336631333431623631396364663533 3665626431626532630a353564323566316162613432373738333064366130303637616239396438 9853 If you use ssh-agent, you do not need the ``ansible_password`` lines. 
If you use ssh keys, but not ssh-agent, and you have multiple keys, specify the key to use for each connection in the ``[group:vars]`` section with ``ansible_ssh_private_key_file=/path/to/correct/key``. For more information on ``ansible_ssh_`` options see :ref:`behavioral_parameters`. .. FIXME FUTURE Gundalow - Link to network auth & proxy page (to be written) .. warning:: Never store passwords in plain text. Ansible vault for password encryption ------------------------------------- The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information. Common inventory variables -------------------------- The following variables are common for all platforms in the inventory, though they can be overwritten for a particular inventory group or host. :ansible_connection: Ansible uses the ansible-connection setting to determine how to connect to a remote device. When working with Ansible Networking, set this to ``network_cli`` so Ansible treats the remote node as a network device with a limited execution environment. Without this setting, Ansible would attempt to use ssh to connect to the remote and execute the Python script on the network device, which would fail because Python generally isn't available on network devices. :ansible_network_os: Informs Ansible which Network platform this hosts corresponds to. This is required when using ``network_cli`` or ``netconf``. :ansible_user: The user to connect to the remote device (switch) as. Without this the user that is running ``ansible-playbook`` would be used. Specifies which user on the network device the connection :ansible_password: The corresponding password for ``ansible_user`` to log in as. If not specified SSH key will be used. :ansible_become: If enable mode (privilege mode) should be used, see the next section. :ansible_become_method: Which type of `become` should be used, for ``network_cli`` the only valid choice is ``enable``. Privilege escalation -------------------- Certain network platforms, such as Arista EOS and Cisco IOS, have the concept of different privilege modes. Certain network modules, such as those that modify system state including users, will only work in high privilege states. Ansible supports ``become`` when using ``connection: network_cli``. This allows privileges to be raised for the specific tasks that need them. Adding ``become: yes`` and ``become_method: enable`` informs Ansible to go into privilege mode before executing the task, as shown here: .. code-block:: ini [eos:vars] ansible_connection=network_cli ansible_network_os=eos ansible_become=yes ansible_become_method=enable For more information, see the :ref:`using become with network modules<become_network>` guide. Jump hosts ---------- If the Ansible Controller doesn't have a direct route to the remote device and you need to use a Jump Host, please see the :ref:`Ansible Network Proxy Command <network_delegate_to_vs_ProxyCommand>` guide for details on how to achieve this. Example 1: collecting facts and creating backup files with a playbook ===================================================================== Ansible facts modules gather system information 'facts' that are available to the rest of your playbook. Ansible Networking ships with a number of network-specific facts modules. 
In this example, we use the ``_facts`` modules :ref:`eos_facts <eos_facts_module>`, :ref:`ios_facts <ios_facts_module>` and :ref:`vyos_facts <vyos_facts_module>` to connect to the remote networking device. As the credentials are not explicitly passed via module arguments, Ansible uses the username and password from the inventory file. Ansible's "Network Fact modules" gather information from the system and store the results in facts prefixed with ``ansible_net_``. The data collected by these modules is documented in the `Return Values` section of the module docs, in this case :ref:`eos_facts <eos_facts_module>` and :ref:`vyos_facts <vyos_facts_module>`. We can use the facts, such as ``ansible_net_version`` late on in the "Display some facts" task. To ensure we call the correct mode (``*_facts``) the task is conditionally run based on the group defined in the inventory file, for more information on the use of conditionals in Ansible Playbooks see :ref:`the_when_statement`. In this example, we will create an inventory file containing some network switches, then run a playbook to connect to the network devices and return some information about them. Step 1: Creating the inventory ------------------------------ First, create a file called ``inventory``, containing: .. code-block:: ini [switches:children] eos ios vyos [eos] eos01.example.net [ios] ios01.example.net [vyos] vyos01.example.net Step 2: Creating the playbook ----------------------------- Next, create a playbook file called ``facts-demo.yml`` containing the following: .. code-block:: yaml - name: "Demonstrate connecting to switches" hosts: switches gather_facts: no tasks: ### # Collect data # - name: Gather facts (eos) eos_facts: when: ansible_network_os == 'eos' - name: Gather facts (ios) ios_facts: when: ansible_network_os == 'ios' - name: Gather facts (vyos) vyos_facts: when: ansible_network_os == 'vyos' ### # Demonstrate variables # - name: Display some facts debug: msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}" - name: Facts from a specific host debug: var: hostvars['vyos01.example.net'] - name: Write facts to disk using a template copy: content: | #jinja2: lstrip_blocks: True EOS device info: {% for host in groups['eos'] %} Hostname: {{ hostvars[host].ansible_net_hostname }} Version: {{ hostvars[host].ansible_net_version }} Model: {{ hostvars[host].ansible_net_model }} Serial: {{ hostvars[host].ansible_net_serialnum }} {% endfor %} IOS device info: {% for host in groups['ios'] %} Hostname: {{ hostvars[host].ansible_net_hostname }} Version: {{ hostvars[host].ansible_net_version }} Model: {{ hostvars[host].ansible_net_model }} Serial: {{ hostvars[host].ansible_net_serialnum }} {% endfor %} VyOS device info: {% for host in groups['vyos'] %} Hostname: {{ hostvars[host].ansible_net_hostname }} Version: {{ hostvars[host].ansible_net_version }} Model: {{ hostvars[host].ansible_net_model }} Serial: {{ hostvars[host].ansible_net_serialnum }} {% endfor %} dest: /tmp/switch-facts run_once: yes ### # Get running configuration # - name: Backup switch (eos) eos_config: backup: yes register: backup_eos_location when: ansible_network_os == 'eos' - name: backup switch (vyos) vyos_config: backup: yes register: backup_vyos_location when: ansible_network_os == 'vyos' - name: Create backup dir file: path: "/tmp/backups/{{ inventory_hostname }}" state: directory recurse: yes - name: Copy backup files into /tmp/backups/ (eos) copy: src: "{{ backup_eos_location.backup_path }}" dest: "/tmp/backups/{{ 
inventory_hostname }}/{{ inventory_hostname }}.bck" when: ansible_network_os == 'eos' - name: Copy backup files into /tmp/backups/ (vyos) copy: src: "{{ backup_vyos_location.backup_path }}" dest: "/tmp/backups/{{ inventory_hostname }}/{{ inventory_hostname }}.bck" when: ansible_network_os == 'vyos' Step 3: Running the playbook ---------------------------- To run the playbook, run the following from a console prompt: .. code-block:: console ansible-playbook -i inventory facts-demo.yml This should return output similar to the following: .. code-block:: console PLAY RECAP eos01.example.net : ok=7 changed=2 unreachable=0 failed=0 ios01.example.net : ok=7 changed=2 unreachable=0 failed=0 vyos01.example.net : ok=6 changed=2 unreachable=0 failed=0 Step 4: Examining the playbook results -------------------------------------- Next, look at the contents of the file we created containing the switch facts: .. code-block:: console cat /tmp/switch-facts You can also look at the backup files: .. code-block:: console find /tmp/backups If `ansible-playbook` fails, please follow the debug steps in :ref:`network_debug_troubleshooting`. .. _network-agnostic-examples: Example 2: simplifying playbooks with network agnostic modules ============================================================== (This example originally appeared in the `Deep Dive on cli_command for Network Automation <https://www.ansible.com/blog/deep-dive-on-cli-command-for-network-automation>`_ blog post by Sean Cavanaugh -`@IPvSean <https://github.com/IPvSean>`_). If you have two or more network platforms in your environment, you can use the network agnostic modules to simplify your playbooks. You can use network agnostic modules such as ``cli_command`` or ``cli_config`` in place of the platform-specific modules such as ``eos_config``, ``ios_config``, and ``junos_config``. This reduces the number of tasks and conditionals you need in your playbooks. .. note:: Network agnostic modules require the :ref:`network_cli <network_cli_connection>` connection plugin. Sample playbook with platform-specific modules ---------------------------------------------- This example assumes three platforms, Arista EOS, Cisco NXOS, and Juniper JunOS. Without the network agnostic modules, a sample playbook might contain the following three tasks with platform-specific commands: .. code-block:: yaml --- - name: Run Arista command eos_command: commands: show ip int br when: ansible_network_os == 'eos' - name: Run Cisco NXOS command nxos_command: commands: show ip int br when: ansible_network_os == 'nxos' - name: Run Vyos command vyos_command: commands: show interface when: ansible_network_os == 'vyos' Simplified playbook with ``cli_command`` network agnostic module ---------------------------------------------------------------- You can replace these platform-specific modules with the network agnostic ``cli_command`` module as follows: .. 
code-block:: yaml --- - hosts: network gather_facts: false connection: network_cli tasks: - name: Run cli_command on Arista and display results block: - name: Run cli_command on Arista cli_command: command: show ip int br register: result - name: Display result to terminal window debug: var: result.stdout_lines when: ansible_network_os == 'eos' - name: Run cli_command on Cisco IOS and display results block: - name: Run cli_command on Cisco IOS cli_command: command: show ip int br register: result - name: Display result to terminal window debug: var: result.stdout_lines when: ansible_network_os == 'ios' - name: Run cli_command on Vyos and display results block: - name: Run cli_command on Vyos cli_command: command: show interfaces register: result - name: Display result to terminal window debug: var: result.stdout_lines when: ansible_network_os == 'vyos' If you use groups and group_vars by platform type, this playbook can be further simplified to : .. code-block:: yaml --- - name: Run command and print to terminal window hosts: routers gather_facts: false tasks: - name: Run show command cli_command: command: "{{show_interfaces}}" register: command_output You can see a full example of this using group_vars and also a configuration backup example at `Network agnostic examples <https://github.com/network-automation/agnostic_example>`_. Using multiple prompts with the ``cli_command`` ------------------------------------------------ The ``cli_command`` also supports multiple prompts. .. code-block:: yaml --- - name: Change password to default cli_command: command: "{{ item }}" prompt: - "New password" - "Retype new password" answer: - "mypassword123" - "mypassword123" check_all: True loop: - "configure" - "rollback" - "set system root-authentication plain-text-password" - "commit" See the :ref:`cli_command <cli_command_module>` for full documentation on this command. Implementation Notes ==================== Demo variables -------------- Although these tasks are not needed to write data to disk, they are used in this example to demonstrate some methods of accessing facts about the given devices or a named host. Ansible ``hostvars`` allows you to access variables from a named host. Without this we would return the details for the current host, rather than the named host. For more information, see :ref:`magic_variables_and_hostvars`. Get running configuration ------------------------- The :ref:`eos_config <eos_config_module>` and :ref:`vyos_config <vyos_config_module>` modules have a ``backup:`` option that when set will cause the module to create a full backup of the current ``running-config`` from the remote device before any changes are made. The backup file is written to the ``backup`` folder in the playbook root directory. If the directory does not exist, it is created. To demonstrate how we can move the backup file to a different location, we register the result and move the file to the path stored in ``backup_path``. Note that when using variables from tasks in this way we use double quotes (``"``) and double curly-brackets (``{{...}}`` to tell Ansible that this is a variable. Troubleshooting =============== If you receive an connection error please double check the inventory and playbook for typos or missing lines. If the issue still occurs follow the debug steps in :ref:`network_debug_troubleshooting`. .. seealso:: * :ref:`network_guide` * :ref:`intro_inventory` * :ref:`Keeping vaulted variables visible <tip_for_variables_and_vaults>`
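Whichever way the INI-inline-vault question in issue 69039 is resolved, the uncontested pattern is to keep vaulted values in a YAML variables file rather than inline in an INI inventory. A minimal sketch, assuming a `group_vars/eos.yml` file for the `[eos]` group shown above; the vault payload is a truncated placeholder as produced by `ansible-vault encrypt_string`, not a real secret:

```yaml
# group_vars/eos.yml — hypothetical group_vars file for the [eos] inventory group
ansible_user: my_eos_user
ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          37373735393636643261383066383235363664386633386432343236663533343730353361653735
          # ...placeholder vault payload, generated with:
          # ansible-vault encrypt_string 'mypassword' --name 'ansible_password'
```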
closed
ansible/ansible
https://github.com/ansible/ansible
62,370
Deprecation warning about bare variable (list) misleading suggestion "| bool"
##### SUMMARY

Use Ansible 2.8 with a condition using a list variable. Witness the deprecation warning:

[DEPRECATION WARNING]: evaluating [] as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in version 2.12.

"add |bool" is most likely the wrong thing to do here, because: https://medium.com/opsops/wft-bool-filter-in-ansible-e7e2fd7a148f

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
core?

##### ANSIBLE VERSION

```
ansible 2.8.5
  config file = /home/gert/ansible/ansible.cfg
  configured module search path = ['/home/gert/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/gert/.local/share/virtualenvs/ansible-BCbjBVJM/lib/python3.6/site-packages/ansible
  executable location = /home/gert/.local/share/virtualenvs/ansible-BCbjBVJM/bin/ansible
  python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```

##### CONFIGURATION

```
ANSIBLE_PIPELINING(/home/gert/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/gert/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=yes -o GlobalKnownHostsFile=./files/openssh-client/ssh_known_hosts -o UserKnownHostsFile=/dev/null
CACHE_PLUGIN(/home/gert/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/gert/ansible/ansible.cfg) = ~/.ansible/fact_cache
CACHE_PLUGIN_TIMEOUT(/home/gert/ansible/ansible.cfg) = 7200
DEFAULT_GATHERING(/home/gert/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/gert/ansible/ansible.cfg) = ['/home/gert/ansible/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/gert/ansible/ansible.cfg) = False
DEFAULT_STDOUT_CALLBACK(/home/gert/ansible/ansible.cfg) = yaml
DEFAULT_STRATEGY(/home/gert/ansible/ansible.cfg) = mitogen_linear
DEFAULT_STRATEGY_PLUGIN_PATH(/home/gert/ansible/ansible.cfg) = ['/home/gert/ansible/strategy_plugins']
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE

Run the playbook below with `ansible-playbook`, version 2.8.x.

```yaml
- hosts: localhost
  vars:
    myvar: []
  tasks:
    - debug: msg=foo
      when: myvar
```

Witness the misleading solution to the deprecation warning as depicted in the summary.

##### EXPECTED RESULTS

I would have expected to see a suggestion about validating the length of the list, e.g. "add '| length > 0'" instead of "add |bool".

##### ACTUAL RESULTS

```paste below
ansible-playbook --check test.yml

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [debug] *******************************************************************
[DEPRECATION WARNING]: evaluating [] as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
https://github.com/ansible/ansible/issues/62370
https://github.com/ansible/ansible/pull/70687
ea58d7c2334c3da15ad8989be12c0909ee294369
b87944926d887e6259e4e64bf9e8be596b3721a3
2019-09-16T22:50:57Z
python
2020-08-18T22:55:30Z
changelogs/fragments/70687-improve-deprecation-message-bare-variable.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
62,370
Deprecation warning about bare variable (list) misleading suggestion "| bool"
##### SUMMARY

Use Ansible 2.8 with a condition using a list variable. Witness the deprecation warning:

[DEPRECATION WARNING]: evaluating [] as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in version 2.12.

"add |bool" is most likely the wrong thing to do here, because: https://medium.com/opsops/wft-bool-filter-in-ansible-e7e2fd7a148f

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
core?

##### ANSIBLE VERSION

```
ansible 2.8.5
  config file = /home/gert/ansible/ansible.cfg
  configured module search path = ['/home/gert/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/gert/.local/share/virtualenvs/ansible-BCbjBVJM/lib/python3.6/site-packages/ansible
  executable location = /home/gert/.local/share/virtualenvs/ansible-BCbjBVJM/bin/ansible
  python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```

##### CONFIGURATION

```
ANSIBLE_PIPELINING(/home/gert/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/gert/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=yes -o GlobalKnownHostsFile=./files/openssh-client/ssh_known_hosts -o UserKnownHostsFile=/dev/null
CACHE_PLUGIN(/home/gert/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/gert/ansible/ansible.cfg) = ~/.ansible/fact_cache
CACHE_PLUGIN_TIMEOUT(/home/gert/ansible/ansible.cfg) = 7200
DEFAULT_GATHERING(/home/gert/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/gert/ansible/ansible.cfg) = ['/home/gert/ansible/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/gert/ansible/ansible.cfg) = False
DEFAULT_STDOUT_CALLBACK(/home/gert/ansible/ansible.cfg) = yaml
DEFAULT_STRATEGY(/home/gert/ansible/ansible.cfg) = mitogen_linear
DEFAULT_STRATEGY_PLUGIN_PATH(/home/gert/ansible/ansible.cfg) = ['/home/gert/ansible/strategy_plugins']
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE

Run the playbook below with `ansible-playbook`, version 2.8.x.

```yaml
- hosts: localhost
  vars:
    myvar: []
  tasks:
    - debug: msg=foo
      when: myvar
```

Witness the misleading solution to the deprecation warning as depicted in the summary.

##### EXPECTED RESULTS

I would have expected to see a suggestion about validating the length of the list, e.g. "add '| length > 0'" instead of "add |bool".

##### ACTUAL RESULTS

```paste below
ansible-playbook --check test.yml

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [debug] *******************************************************************
[DEPRECATION WARNING]: evaluating [] as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
https://github.com/ansible/ansible/issues/62370
https://github.com/ansible/ansible/pull/70687
ea58d7c2334c3da15ad8989be12c0909ee294369
b87944926d887e6259e4e64bf9e8be596b3721a3
2019-09-16T22:50:57Z
python
2020-08-18T22:55:30Z
lib/ansible/playbook/conditional.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import ast import re from jinja2.compiler import generate from jinja2.exceptions import UndefinedError from ansible import constants as C from ansible.errors import AnsibleError, AnsibleUndefinedVariable from ansible.module_utils.six import text_type from ansible.module_utils._text import to_native from ansible.playbook.attribute import FieldAttribute from ansible.utils.display import Display display = Display() DEFINED_REGEX = re.compile(r'(hostvars\[.+\]|[\w_]+)\s+(not\s+is|is|is\s+not)\s+(defined|undefined)') LOOKUP_REGEX = re.compile(r'lookup\s*\(') VALID_VAR_REGEX = re.compile("^[_A-Za-z][_a-zA-Z0-9]*$") class Conditional: ''' This is a mix-in class, to be used with Base to allow the object to be run conditionally when a condition is met or skipped. ''' _when = FieldAttribute(isa='list', default=list, extend=True, prepend=True) def __init__(self, loader=None): # when used directly, this class needs a loader, but we want to # make sure we don't trample on the existing one if this class # is used as a mix-in with a playbook base class if not hasattr(self, '_loader'): if loader is None: raise AnsibleError("a loader must be specified when using Conditional() directly") else: self._loader = loader super(Conditional, self).__init__() def _validate_when(self, attr, name, value): if not isinstance(value, list): setattr(self, name, [value]) def extract_defined_undefined(self, conditional): results = [] cond = conditional m = DEFINED_REGEX.search(cond) while m: results.append(m.groups()) cond = cond[m.end():] m = DEFINED_REGEX.search(cond) return results def evaluate_conditional(self, templar, all_vars): ''' Loops through the conditionals set on this object, returning False if any of them evaluate as such. ''' # since this is a mix-in, it may not have an underlying datastructure # associated with it, so we pull it out now in case we need it for # error reporting below ds = None if hasattr(self, '_ds'): ds = getattr(self, '_ds') try: for conditional in self.when: if not self._check_conditional(conditional, templar, all_vars): return False except Exception as e: raise AnsibleError( "The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds ) return True def _check_conditional(self, conditional, templar, all_vars): ''' This method does the low-level evaluation of each conditional set on this object, using jinja2 to wrap the conditionals for evaluation. 
''' original = conditional if conditional is None or conditional == '': return True # this allows for direct boolean assignments to conditionals "when: False" if isinstance(conditional, bool): return conditional if templar.is_template(conditional): display.warning('conditional statements should not include jinja2 ' 'templating delimiters such as {{ }} or {%% %%}. ' 'Found: %s' % conditional) bare_vars_warning = False if C.CONDITIONAL_BARE_VARS: if conditional in all_vars and VALID_VAR_REGEX.match(conditional): conditional = all_vars[conditional] bare_vars_warning = True # make sure the templar is using the variables specified with this method templar.available_variables = all_vars try: # if the conditional is "unsafe", disable lookups disable_lookups = hasattr(conditional, '__UNSAFE__') conditional = templar.template(conditional, disable_lookups=disable_lookups) if bare_vars_warning and not isinstance(conditional, bool): display.deprecated('evaluating %r as a bare variable, this behaviour will go away and you might need to add |bool' ' to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle' % original, version="2.12", collection_name='ansible.builtin') if not isinstance(conditional, text_type) or conditional == "": return conditional # update the lookups flag, as the string returned above may now be unsafe # and we don't want future templating calls to do unsafe things disable_lookups |= hasattr(conditional, '__UNSAFE__') # First, we do some low-level jinja2 parsing involving the AST format of the # statement to ensure we don't do anything unsafe (using the disable_lookup flag above) class CleansingNodeVisitor(ast.NodeVisitor): def generic_visit(self, node, inside_call=False, inside_yield=False): if isinstance(node, ast.Call): inside_call = True elif isinstance(node, ast.Yield): inside_yield = True elif isinstance(node, ast.Str): if disable_lookups: if inside_call and node.s.startswith("__"): # calling things with a dunder is generally bad at this point... raise AnsibleError( "Invalid access found in the conditional: '%s'" % conditional ) elif inside_yield: # we're inside a yield, so recursively parse and traverse the AST # of the result to catch forbidden syntax from executing parsed = ast.parse(node.s, mode='exec') cnv = CleansingNodeVisitor() cnv.visit(parsed) # iterate over all child nodes for child_node in ast.iter_child_nodes(node): self.generic_visit( child_node, inside_call=inside_call, inside_yield=inside_yield ) try: e = templar.environment.overlay() e.filters.update(templar.environment.filters) e.tests.update(templar.environment.tests) res = e._parse(conditional, None, None) res = generate(res, e, None, None) parsed = ast.parse(res, mode='exec') cnv = CleansingNodeVisitor() cnv.visit(parsed) except Exception as e: raise AnsibleError("Invalid conditional detected: %s" % to_native(e)) # and finally we generate and template the presented string and look at the resulting string presented = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % conditional val = templar.template(presented, disable_lookups=disable_lookups).strip() if val == "True": return True elif val == "False": return False else: raise AnsibleError("unable to evaluate conditional: %s" % original) except (AnsibleUndefinedVariable, UndefinedError) as e: # the templating failed, meaning most likely a variable was undefined. 
If we happened # to be looking for an undefined variable, return True, otherwise fail try: # first we extract the variable name from the error message var_name = re.compile(r"'(hostvars\[.+\]|[\w_]+)' is undefined").search(str(e)).groups()[0] # next we extract all defined/undefined tests from the conditional string def_undef = self.extract_defined_undefined(conditional) # then we loop through these, comparing the error variable name against # each def/undef test we found above. If there is a match, we determine # whether the logic/state mean the variable should exist or not and return # the corresponding True/False for (du_var, logic, state) in def_undef: # when we compare the var names, normalize quotes because something # like hostvars['foo'] may be tested against hostvars["foo"] if var_name.replace("'", '"') == du_var.replace("'", '"'): # the should exist is a xor test between a negation in the logic portion # against the state (defined or undefined) should_exist = ('not' in logic) != (state == 'defined') if should_exist: return False else: return True # as nothing above matched the failed var name, re-raise here to # trigger the AnsibleUndefinedVariable exception again below raise except Exception: raise AnsibleUndefinedVariable("error while evaluating conditional (%s): %s" % (original, e))
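Independent of how the deprecation message in `conditional.py` is worded, the explicit forms below avoid the bare-variable evaluation entirely. A minimal sketch of the two tests at issue — the length check the reporter asked the warning to suggest for lists, and the `| bool` cast, which only makes sense for boolean-ish values:

```yaml
- hosts: localhost
  gather_facts: no
  vars:
    myvar: []        # a list: test its length, not its truthiness
    myflag: "true"   # a boolean-ish string: | bool is the right cast here
  tasks:
    - name: Runs only when the list is non-empty (what the reporter wanted suggested)
      debug:
        msg: foo
      when: myvar | length > 0

    - name: Runs when the flag evaluates to true
      debug:
        msg: bar
      when: myflag | bool
```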
closed
ansible/ansible
https://github.com/ansible/ansible
61,681
Kubernetes scenario guide out of date
##### SUMMARY

This scenario guide is out of date: https://docs.ansible.com/ansible/latest/scenario_guides/guide_kubernetes.html

It refers to a requirement to use a manually built version of Ansible and links to Galaxy roles (https://galaxy.ansible.com/ansible/kubernetes-modules) that exist but are out of date, and their source repository (https://github.com/ansible/ansible-kubernetes-modules) has been archived, because these modules have been available in Ansible itself since Ansible 2.5.

My suggestion would be to remove this file entirely, as it is no longer required in currently supported versions of Ansible; or perhaps replace it with a short page containing a link to the k8s_raw and openshift_raw modules that were added in Ansible 2.5.

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME
guide_kubernetes.rst

##### ANSIBLE VERSION

```paste below
N/A - entire page out of date for ansible >= 2.5
```

##### CONFIGURATION

```paste below
N/A - entire page out of date for ansible >= 2.5
```

##### OS / ENVIRONMENT

##### ADDITIONAL INFORMATION

I found this documentation via a search in the sidebar of the docs site. After reading and following the links I realised that I should disregard it for modern versions of Ansible. Simply not having the page there would be an improvement.
https://github.com/ansible/ansible/issues/61681
https://github.com/ansible/ansible/pull/71372
5498b0bb7182d59996fedbfa39a4d3fa825f2305
59b80b9146765382f7fbbeefe401fe33b0df033b
2019-09-02T11:14:12Z
python
2020-08-21T14:37:03Z
docs/docsite/rst/scenario_guides/guide_kubernetes.rst
Kubernetes and OpenShift Guide
==============================

Modules for interacting with the Kubernetes (K8s) and OpenShift API are under development, and can be used in preview mode. To use them, review the requirements, and then follow the installation and use instructions.

Requirements
------------

To use the modules, you'll need the following:

- Run Ansible from source. For assistance, view :ref:`from_source`.
- `OpenShift Rest Client <https://github.com/openshift/openshift-restclient-python>`_ installed on the host that will execute the modules.

Installation and use
--------------------

The individual modules, as of this writing, are not part of the Ansible repository, but they can be accessed by installing the role, `ansible.kubernetes-modules <https://galaxy.ansible.com/ansible/kubernetes-modules/>`_, and including it in a playbook.

To install, run the following:

.. code-block:: bash

   $ ansible-galaxy install ansible.kubernetes-modules

Next, include it in a playbook, as follows:

.. code-block:: yaml

   ---
   - hosts: localhost
     remote_user: root
     roles:
       - role: ansible.kubernetes-modules
       - role: hello-world

Because the role is referenced, ``hello-world`` is able to access the modules, and use them to deploy an application.

The modules are found in the ``library`` folder of the role. Each includes full documentation for parameters and the returned data structure. However, not all modules include examples, only those where `testing data <https://github.com/openshift/openshift-restclient-python/tree/release-0.8/openshift/ansiblegen/examples>`_ has been created.

Authenticating with the API
---------------------------

By default the OpenShift Rest Client will look for ``~/.kube/config``, and if found, connect using the active context. You can override the location of the file using the ``kubeconfig`` parameter, and the context, using the ``context`` parameter.

Basic authentication is also supported using the ``username`` and ``password`` options. You can override the URL using the ``host`` parameter. Certificate authentication works through the ``ssl_ca_cert``, ``cert_file``, and ``key_file`` parameters, and for token authentication, use the ``api_key`` parameter.

To disable SSL certificate verification, set ``verify_ssl`` to false.

Filing issues
`````````````

If you find a bug or have a suggestion regarding individual modules or the role, please file issues at `OpenShift Rest Client issues <https://github.com/openshift/openshift-restclient-python/issues>`_.

There is also a utility module, k8s_common.py, that is part of the `Ansible <https://github.com/ansible/ansible>`_ repo. If you find a bug or have suggestions regarding it, please file issues at `Ansible issues <https://github.com/ansible/ansible/issues>`_.
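The authentication section of the guide above maps module parameters onto the underlying Python clients. The following is a minimal sketch, assuming the ``kubernetes`` and ``openshift`` Python packages named in the guide's requirements; the host URL, context name, and token here are placeholders, not values from the guide.

```python
import os

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Option 1: kubeconfig-based auth, the modules' default behaviour.
# The kubeconfig/context module parameters map onto these arguments.
config.load_kube_config(
    config_file=os.path.expanduser('~/.kube/config'),
    context='my-context',  # placeholder context name
)

# Option 2: token auth against an explicit host, mirroring the
# host/api_key module parameters, with SSL verification disabled
# (verify_ssl: false in module terms).
configuration = client.Configuration()
configuration.host = 'https://openshift.example.com:8443'  # placeholder URL
configuration.api_key = {'authorization': 'Bearer <token>'}  # placeholder token
configuration.verify_ssl = False

dyn_client = DynamicClient(client.ApiClient(configuration))
v1_projects = dyn_client.resources.get(api_version='project.openshift.io/v1', kind='Project')
for project in v1_projects.get().items:
    print(project.metadata.name)
```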
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
changelogs/fragments/67508-meta-task-tags.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst
.. _porting_2.11_guide_base:

*******************************
Ansible-base 2.11 Porting Guide
*******************************

This section discusses the behavioral changes between Ansible-base 2.10 and Ansible-base 2.11.

It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible-base.

We suggest you read this page along with the `Ansible-base Changelog for 2.11 <https://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst>`_ to understand what updates you may need to make.

Ansible-base is mainly of interest for developers and users who only want to use a small, controlled subset of the available collections. Regular users should install ansible.

The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.

.. contents::

Playbook
========

No notable changes

Command Line
============

No notable changes

Deprecated
==========

No notable changes

Modules
=======

* The ``apt_key`` module has explicitly defined ``file`` as mutually exclusive with ``data``, ``keyserver`` and ``url``. They cannot be used together anymore.

Modules removed
---------------

The following modules no longer exist:

* No notable changes

Deprecation notices
-------------------

No notable changes

Noteworthy module changes
-------------------------

* facts - On NetBSD, ``ansible_virtualization_type`` now tries to report a more accurate result than ``xen`` when virtualized and not running on Xen.
* facts - Virtualization facts now include ``virtualization_tech_guest`` and ``virtualization_tech_host`` keys. These are lists of virtualization technologies that a guest is a part of, or that a host provides, respectively. As an example, a host may be set up to provide both KVM and VirtualBox, and these will be included in ``virtualization_tech_host``, and a podman container running on a VM powered by KVM will have a ``virtualization_tech_guest`` of ``["kvm", "podman", "container"]``.

Plugins
=======

* inventory plugins - ``CachePluginAdjudicator.flush()`` now calls the underlying cache plugin's ``flush()`` instead of only deleting keys that it knows about. Inventory plugins should use ``delete()`` to remove any specific keys. As a user, this means that when an inventory plugin calls its ``clear_cache()`` method, facts could also be flushed from the cache. To work around this, users can configure inventory plugins to use a cache backend that is independent of the facts cache.
* callback plugins - ``meta`` task execution is now sent to ``v2_playbook_on_task_start`` like any other task. By default, only explicit meta tasks are sent there. Callback plugins can opt-in to receiving internal, implicitly created tasks to act on those as well, as noted in the plugin development documentation.

Porting custom scripts
======================

No notable changes
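A minimal callback-plugin sketch of the meta-task change described in the Plugins section above. The ``wants_implicit_tasks`` opt-in attribute reflects my reading of the 2.11 change and should be verified against the plugin development documentation; everything else uses the standard ``CallbackBase`` API.

```python
from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'meta_logger'  # hypothetical plugin name for this sketch

    def __init__(self, *args, **kwargs):
        super(CallbackModule, self).__init__(*args, **kwargs)
        # Opt in to internal, implicitly created tasks as well; without this,
        # only explicit meta tasks from the playbook are delivered.
        self.wants_implicit_tasks = True

    def v2_playbook_on_task_start(self, task, is_conditional):
        # As of 2.11, meta tasks arrive here like any other task.
        if task.action == 'meta':
            self._display.display('meta task seen: %s' % task.args.get('_raw_params'))
```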
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
lib/ansible/cli/playbook.py
# (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import stat

from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.module_utils._text import to_bytes
from ansible.playbook.block import Block
from ansible.utils.display import Display
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.plugins.loader import add_all_plugin_dirs


display = Display()


class PlaybookCLI(CLI):
    ''' the tool to run *Ansible playbooks*, which are a configuration and multinode deployment system.
        See the project home page (https://docs.ansible.com) for more information. '''

    def init_parser(self):

        # create parser for CLI options
        super(PlaybookCLI, self).init_parser(
            usage="%prog [options] playbook.yml [playbook2 ...]",
            desc="Runs Ansible playbooks, executing the defined tasks on the targeted hosts.")

        opt_help.add_connect_options(self.parser)
        opt_help.add_meta_options(self.parser)
        opt_help.add_runas_options(self.parser)
        opt_help.add_subset_options(self.parser)
        opt_help.add_check_options(self.parser)
        opt_help.add_inventory_options(self.parser)
        opt_help.add_runtask_options(self.parser)
        opt_help.add_vault_options(self.parser)
        opt_help.add_fork_options(self.parser)
        opt_help.add_module_options(self.parser)

        # ansible playbook specific opts
        self.parser.add_argument('--list-tasks', dest='listtasks', action='store_true',
                                 help="list all tasks that would be executed")
        self.parser.add_argument('--list-tags', dest='listtags', action='store_true',
                                 help="list all available tags")
        self.parser.add_argument('--step', dest='step', action='store_true',
                                 help="one-step-at-a-time: confirm each task before running")
        self.parser.add_argument('--start-at-task', dest='start_at_task',
                                 help="start the playbook at the task matching this name")
        self.parser.add_argument('args', help='Playbook(s)', metavar='playbook', nargs='+')

    def post_process_args(self, options):
        options = super(PlaybookCLI, self).post_process_args(options)

        display.verbosity = options.verbosity
        self.validate_conflicts(options, runas_opts=True, fork_opts=True)

        return options

    def run(self):

        super(PlaybookCLI, self).run()

        # Note: slightly wrong, this is written so that implicit localhost
        # manages passwords
        sshpass = None
        becomepass = None
        passwords = {}

        # initial error check, to make sure all specified playbooks are accessible
        # before we start running anything through the playbook executor
        b_playbook_dirs = []
        for playbook in context.CLIARGS['args']:
            if not os.path.exists(playbook):
                raise AnsibleError("the playbook: %s could not be found" % playbook)
            if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)):
                raise AnsibleError("the playbook: %s does not appear to be a file" % playbook)

            b_playbook_dir = os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict')))
            # load plugins from all playbooks in case they add callbacks/inventory/etc
            add_all_plugin_dirs(b_playbook_dir)

            b_playbook_dirs.append(b_playbook_dir)

        AnsibleCollectionConfig.playbook_paths = b_playbook_dirs

        playbook_collection = _get_collection_name_from_path(b_playbook_dirs[0])

        if playbook_collection:
            display.warning("running playbook inside collection {0}".format(playbook_collection))
            AnsibleCollectionConfig.default_collection = playbook_collection

        # don't deal with privilege escalation or passwords when we don't need to
        if not (context.CLIARGS['listhosts'] or context.CLIARGS['listtasks'] or
                context.CLIARGS['listtags'] or context.CLIARGS['syntax']):
            (sshpass, becomepass) = self.ask_passwords()
            passwords = {'conn_pass': sshpass, 'become_pass': becomepass}

        # create base objects
        loader, inventory, variable_manager = self._play_prereqs()

        # (which is not returned in list_hosts()) is taken into account for
        # warning if inventory is empty.  But it can't be taken into account for
        # checking if limit doesn't match any hosts.  Instead we don't worry about
        # limit if only implicit localhost was in inventory to start with.
        #
        # Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts())
        CLI.get_host_list(inventory, context.CLIARGS['subset'])

        # flush fact cache if requested
        if context.CLIARGS['flush_cache']:
            self._flush_cache(inventory, variable_manager)

        # create the playbook executor, which manages running the plays via a task queue manager
        pbex = PlaybookExecutor(playbooks=context.CLIARGS['args'], inventory=inventory,
                                variable_manager=variable_manager, loader=loader,
                                passwords=passwords)

        results = pbex.run()

        if isinstance(results, list):
            for p in results:

                display.display('\nplaybook: %s' % p['playbook'])
                for idx, play in enumerate(p['plays']):
                    if play._included_path is not None:
                        loader.set_basedir(play._included_path)
                    else:
                        pb_dir = os.path.realpath(os.path.dirname(p['playbook']))
                        loader.set_basedir(pb_dir)

                    msg = "\n  play #%d (%s): %s" % (idx + 1, ','.join(play.hosts), play.name)
                    mytags = set(play.tags)
                    msg += '\tTAGS: [%s]' % (','.join(mytags))

                    if context.CLIARGS['listhosts']:
                        playhosts = set(inventory.get_hosts(play.hosts))
                        msg += "\n    pattern: %s\n    hosts (%d):" % (play.hosts, len(playhosts))
                        for host in playhosts:
                            msg += "\n      %s" % host

                    display.display(msg)

                    all_tags = set()
                    if context.CLIARGS['listtags'] or context.CLIARGS['listtasks']:
                        taskmsg = ''

                        if context.CLIARGS['listtasks']:
                            taskmsg = '    tasks:\n'

                        def _process_block(b):
                            taskmsg = ''
                            for task in b.block:
                                if isinstance(task, Block):
                                    taskmsg += _process_block(task)
                                else:
                                    if task.action == 'meta':
                                        continue

                                    all_tags.update(task.tags)
                                    if context.CLIARGS['listtasks']:
                                        cur_tags = list(mytags.union(set(task.tags)))
                                        cur_tags.sort()
                                        if task.name:
                                            taskmsg += "      %s" % task.get_name()
                                        else:
                                            taskmsg += "      %s" % task.action
                                        taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags)

                            return taskmsg

                        all_vars = variable_manager.get_vars(play=play)
                        for block in play.compile():
                            block = block.filter_tagged_tasks(all_vars)
                            if not block.has_tasks():
                                continue
                            taskmsg += _process_block(block)

                        if context.CLIARGS['listtags']:
                            cur_tags = list(mytags.union(all_tags))
                            cur_tags.sort()
                            taskmsg += "     TASK TAGS: [%s]\n" % ', '.join(cur_tags)

                        display.display(taskmsg)

            return 0
        else:
            return results

    @staticmethod
    def _flush_cache(inventory, variable_manager):
        for host in inventory.list_hosts():
            hostname = host.get_name()
            variable_manager.clear_facts(hostname)
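For readers tracing the ``--list-tasks`` output logic in ``run()`` above, here is a self-contained sketch of the same recursive walk, using plain dicts in place of Block/Task objects; all names and sample data are invented for the illustration. Note how the meta task is skipped, mirroring the ``task.action == 'meta'`` test.

```python
def walk(block, depth=0):
    lines = []
    for task in block['tasks']:
        if isinstance(task, dict) and 'tasks' in task:  # a nested block
            lines.extend(walk(task, depth + 1))
        elif task.get('action') == 'meta':  # meta tasks are hidden from the listing
            continue
        else:
            lines.append('  ' * depth + task['name'])
    return lines


play = {'tasks': [
    {'name': 'a1', 'action': 'debug'},
    {'tasks': [{'name': 'nested_task', 'action': 'debug'},
               {'name': 'end_host', 'action': 'meta'}]},
]}
print('\n'.join(walk(play)))  # prints a1 and nested_task; the meta task is skipped
```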
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
lib/ansible/modules/meta.py
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: (c) 2016, Ansible, a Red Hat company
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type


DOCUMENTATION = r'''
module: meta
short_description: Execute Ansible 'actions'
version_added: '1.2'
description:
    - Meta tasks are a special kind of task which can influence Ansible internal execution or state.
    - Meta tasks can be used anywhere within your playbook.
    - This module is also supported for Windows targets.
options:
  free_form:
    description:
        - This module takes a free form command, as a string. There is not an actual option named "free form". See the examples!
        - C(flush_handlers) makes Ansible run any handler tasks which have thus far been notified. Ansible inserts these tasks internally at certain
          points to implicitly trigger handler runs (after pre/post tasks, the final role execution, and the main tasks section of your plays).
        - C(refresh_inventory) (added in Ansible 2.0) forces the reload of the inventory, which in the case of dynamic inventory scripts means they will be
          re-executed. If the dynamic inventory script is using a cache, Ansible cannot know this and has no way of refreshing it (you can disable the cache
          or, if available for your specific inventory datasource (e.g. aws), you can use an inventory plugin instead of an inventory script).
          This is mainly useful when additional hosts are created and users wish to use them instead of using the M(ansible.builtin.add_host) module.
        - C(noop) (added in Ansible 2.0) This literally does 'nothing'. It is mainly used internally and not recommended for general use.
        - C(clear_facts) (added in Ansible 2.1) causes the gathered facts for the hosts specified in the play's list of hosts to be cleared,
          including the fact cache.
        - C(clear_host_errors) (added in Ansible 2.1) clears the failed state (if any) from hosts specified in the play's list of hosts.
        - C(end_play) (added in Ansible 2.2) causes the play to end without failing the host(s). Note that this affects all hosts.
        - C(reset_connection) (added in Ansible 2.3) interrupts a persistent connection (i.e. ssh + control persist)
        - C(end_host) (added in Ansible 2.8) is a per-host variation of C(end_play). Causes the play to end for the current host without failing it.
    choices: [ clear_facts, clear_host_errors, end_host, end_play, flush_handlers, noop, refresh_inventory, reset_connection ]
    required: true
notes:
    - C(meta) is not really a module nor an action_plugin, as such it cannot be overwritten.
    - C(clear_facts) will remove the persistent facts from M(ansible.builtin.set_fact) using C(cacheable=True),
      but not the current host variable it creates for the current run.
    - Looping on meta tasks is not supported.
    - Skipping C(meta) tasks using tags is not supported.
    - This module is also supported for Windows targets.
seealso:
- module: ansible.builtin.assert
- module: ansible.builtin.fail
author:
    - Ansible Core Team
'''

EXAMPLES = r'''
# Example showing flushing handlers on demand, not at end of play
- template:
    src: new.j2
    dest: /etc/config.txt
  notify: myhandler

- name: Force all notified handlers to run at this point, not waiting for normal sync points
  meta: flush_handlers

# Example showing how to refresh inventory during play
- name: Reload inventory, useful with dynamic inventories when play makes changes to the existing hosts
  cloud_guest:  # this is fake module
    name: newhost
    state: present

- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory

# Example showing how to clear all existing facts of targeted hosts
- name: Clear gathered facts from all currently targeted hosts
  meta: clear_facts

# Example showing how to continue using a failed target
- name: Bring host back to play after failure
  copy:
    src: file
    dest: /etc/file
  remote_user: imightnothavepermission

- meta: clear_host_errors

# Example showing how to reset an existing connection
- user:
    name: '{{ ansible_user }}'
    groups: input

- name: Reset ssh connection to allow user changes to affect 'current login user'
  meta: reset_connection

# Example showing how to end the play for specific targets
- name: End the play for hosts that run CentOS 6
  meta: end_host
  when:
    - ansible_distribution == 'CentOS'
    - ansible_distribution_major_version == '6'
'''
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
lib/ansible/playbook/block.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.errors import AnsibleParserError
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.conditional import Conditional
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.helpers import load_list_of_tasks
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.sentinel import Sentinel


class Block(Base, Conditional, CollectionSearch, Taggable):

    # main block fields containing the task lists
    _block = FieldAttribute(isa='list', default=list, inherit=False)
    _rescue = FieldAttribute(isa='list', default=list, inherit=False)
    _always = FieldAttribute(isa='list', default=list, inherit=False)

    # other fields
    _delegate_to = FieldAttribute(isa='string')
    _delegate_facts = FieldAttribute(isa='bool')

    # for future consideration? this would be functionally
    # similar to the 'else' clause for exceptions
    # _otherwise = FieldAttribute(isa='list')

    def __init__(self, play=None, parent_block=None, role=None, task_include=None, use_handlers=False, implicit=False):
        self._play = play
        self._role = role
        self._parent = None
        self._dep_chain = None
        self._use_handlers = use_handlers
        self._implicit = implicit

        # end of role flag
        self._eor = False

        if task_include:
            self._parent = task_include
        elif parent_block:
            self._parent = parent_block

        super(Block, self).__init__()

    def __repr__(self):
        return "BLOCK(uuid=%s)(id=%s)(parent=%s)" % (self._uuid, id(self), self._parent)

    def __eq__(self, other):
        '''object comparison based on _uuid'''
        return self._uuid == other._uuid

    def __ne__(self, other):
        '''object comparison based on _uuid'''
        return self._uuid != other._uuid

    def get_vars(self):
        '''
        Blocks do not store variables directly, however they may be a member
        of a role or task include which does, so return those if present.
        '''

        all_vars = self.vars.copy()

        if self._parent:
            all_vars.update(self._parent.get_vars())

        return all_vars

    @staticmethod
    def load(data, play=None, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
        implicit = not Block.is_block(data)
        b = Block(play=play, parent_block=parent_block, role=role, task_include=task_include, use_handlers=use_handlers, implicit=implicit)
        return b.load_data(data, variable_manager=variable_manager, loader=loader)

    @staticmethod
    def is_block(ds):
        is_block = False
        if isinstance(ds, dict):
            for attr in ('block', 'rescue', 'always'):
                if attr in ds:
                    is_block = True
                    break
        return is_block

    def preprocess_data(self, ds):
        '''
        If a simple task is given, an implicit block for that single task
        is created, which goes in the main portion of the block
        '''

        if not Block.is_block(ds):
            if isinstance(ds, list):
                return super(Block, self).preprocess_data(dict(block=ds))
            else:
                return super(Block, self).preprocess_data(dict(block=[ds]))

        return super(Block, self).preprocess_data(ds)

    def _load_block(self, attr, ds):
        try:
            return load_list_of_tasks(
                ds,
                play=self._play,
                block=self,
                role=self._role,
                task_include=None,
                variable_manager=self._variable_manager,
                loader=self._loader,
                use_handlers=self._use_handlers,
            )
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading a block", obj=self._ds, orig_exc=e)

    def _load_rescue(self, attr, ds):
        try:
            return load_list_of_tasks(
                ds,
                play=self._play,
                block=self,
                role=self._role,
                task_include=None,
                variable_manager=self._variable_manager,
                loader=self._loader,
                use_handlers=self._use_handlers,
            )
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading rescue.", obj=self._ds, orig_exc=e)

    def _load_always(self, attr, ds):
        try:
            return load_list_of_tasks(
                ds,
                play=self._play,
                block=self,
                role=self._role,
                task_include=None,
                variable_manager=self._variable_manager,
                loader=self._loader,
                use_handlers=self._use_handlers,
            )
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading always", obj=self._ds, orig_exc=e)

    def _validate_always(self, attr, name, value):
        if value and not self.block:
            raise AnsibleParserError("'%s' keyword cannot be used without 'block'" % name, obj=self._ds)

    _validate_rescue = _validate_always

    def get_dep_chain(self):
        if self._dep_chain is None:
            if self._parent:
                return self._parent.get_dep_chain()
            else:
                return None
        else:
            return self._dep_chain[:]

    def copy(self, exclude_parent=False, exclude_tasks=False):

        def _dupe_task_list(task_list, new_block):
            new_task_list = []
            for task in task_list:
                new_task = task.copy(exclude_parent=True)
                if task._parent:
                    new_task._parent = task._parent.copy(exclude_tasks=True)
                    if task._parent == new_block:
                        # If task._parent is the same as new_block, just replace it
                        new_task._parent = new_block
                    else:
                        # task may not be a direct child of new_block, search for the correct place to insert new_block
                        cur_obj = new_task._parent
                        while cur_obj._parent and cur_obj._parent != new_block:
                            cur_obj = cur_obj._parent

                        cur_obj._parent = new_block
                else:
                    new_task._parent = new_block
                new_task_list.append(new_task)
            return new_task_list

        new_me = super(Block, self).copy()
        new_me._play = self._play
        new_me._use_handlers = self._use_handlers
        new_me._eor = self._eor

        if self._dep_chain is not None:
            new_me._dep_chain = self._dep_chain[:]

        new_me._parent = None
        if self._parent and not exclude_parent:
            new_me._parent = self._parent.copy(exclude_tasks=True)

        if not exclude_tasks:
            new_me.block = _dupe_task_list(self.block or [], new_me)
            new_me.rescue = _dupe_task_list(self.rescue or [], new_me)
            new_me.always = _dupe_task_list(self.always or [], new_me)

        new_me._role = None
        if self._role:
            new_me._role = self._role

        new_me.validate()
        return new_me

    def serialize(self):
        '''
        Override of the default serialize method, since when we're serializing
        a task we don't want to include the attribute list of tasks.
        '''

        data = dict()
        for attr in self._valid_attrs:
            if attr not in ('block', 'rescue', 'always'):
                data[attr] = getattr(self, attr)

        data['dep_chain'] = self.get_dep_chain()
        data['eor'] = self._eor

        if self._role is not None:
            data['role'] = self._role.serialize()
        if self._parent is not None:
            data['parent'] = self._parent.copy(exclude_tasks=True).serialize()
            data['parent_type'] = self._parent.__class__.__name__

        return data

    def deserialize(self, data):
        '''
        Override of the default deserialize method, to match the above overridden
        serialize method
        '''

        # import is here to avoid import loops
        from ansible.playbook.task_include import TaskInclude
        from ansible.playbook.handler_task_include import HandlerTaskInclude

        # we don't want the full set of attributes (the task lists), as that
        # would lead to a serialize/deserialize loop
        for attr in self._valid_attrs:
            if attr in data and attr not in ('block', 'rescue', 'always'):
                setattr(self, attr, data.get(attr))

        self._dep_chain = data.get('dep_chain', None)
        self._eor = data.get('eor', False)

        # if there was a serialized role, unpack it too
        role_data = data.get('role')
        if role_data:
            r = Role()
            r.deserialize(role_data)
            self._role = r

        parent_data = data.get('parent')
        if parent_data:
            parent_type = data.get('parent_type')
            if parent_type == 'Block':
                p = Block()
            elif parent_type == 'TaskInclude':
                p = TaskInclude()
            elif parent_type == 'HandlerTaskInclude':
                p = HandlerTaskInclude()
            p.deserialize(parent_data)
            self._parent = p
            self._dep_chain = self._parent.get_dep_chain()

    def set_loader(self, loader):
        self._loader = loader
        if self._parent:
            self._parent.set_loader(loader)
        elif self._role:
            self._role.set_loader(loader)

        dep_chain = self.get_dep_chain()
        if dep_chain:
            for dep in dep_chain:
                dep.set_loader(loader)

    def _get_parent_attribute(self, attr, extend=False, prepend=False):
        '''
        Generic logic to get the attribute or parent attribute for a block value.
        '''

        extend = self._valid_attrs[attr].extend
        prepend = self._valid_attrs[attr].prepend
        try:
            value = self._attributes[attr]
            # If parent is static, we can grab attrs from the parent
            # otherwise, defer to the grandparent
            if getattr(self._parent, 'statically_loaded', True):
                _parent = self._parent
            else:
                _parent = self._parent._parent

            if _parent and (value is Sentinel or extend):
                try:
                    if getattr(_parent, 'statically_loaded', True):
                        if hasattr(_parent, '_get_parent_attribute'):
                            parent_value = _parent._get_parent_attribute(attr)
                        else:
                            parent_value = _parent._attributes.get(attr, Sentinel)
                        if extend:
                            value = self._extend_value(value, parent_value, prepend)
                        else:
                            value = parent_value
                except AttributeError:
                    pass
            if self._role and (value is Sentinel or extend):
                try:
                    parent_value = self._role._attributes.get(attr, Sentinel)
                    if extend:
                        value = self._extend_value(value, parent_value, prepend)
                    else:
                        value = parent_value

                    dep_chain = self.get_dep_chain()
                    if dep_chain and (value is Sentinel or extend):
                        dep_chain.reverse()
                        for dep in dep_chain:
                            dep_value = dep._attributes.get(attr, Sentinel)
                            if extend:
                                value = self._extend_value(value, dep_value, prepend)
                            else:
                                value = dep_value

                            if value is not Sentinel and not extend:
                                break
                except AttributeError:
                    pass
            if self._play and (value is Sentinel or extend):
                try:
                    play_value = self._play._attributes.get(attr, Sentinel)
                    if play_value is not Sentinel:
                        if extend:
                            value = self._extend_value(value, play_value, prepend)
                        else:
                            value = play_value
                except AttributeError:
                    pass
        except KeyError:
            pass

        return value

    def filter_tagged_tasks(self, all_vars):
        '''
        Creates a new block, with task lists filtered based on the tags.
        '''

        def evaluate_and_append_task(target):
            tmp_list = []
            for task in target:
                if isinstance(task, Block):
                    filtered_block = evaluate_block(task)
                    if filtered_block.has_tasks():
                        tmp_list.append(filtered_block)
                elif (task.action == 'meta' or
                        (task.action == 'include' and task.evaluate_tags([], self._play.skip_tags, all_vars=all_vars)) or
                        task.evaluate_tags(self._play.only_tags, self._play.skip_tags, all_vars=all_vars)):
                    tmp_list.append(task)
            return tmp_list

        def evaluate_block(block):
            new_block = block.copy(exclude_parent=True, exclude_tasks=True)
            new_block._parent = block._parent
            new_block.block = evaluate_and_append_task(block.block)
            new_block.rescue = evaluate_and_append_task(block.rescue)
            new_block.always = evaluate_and_append_task(block.always)
            return new_block

        return evaluate_block(self)

    def has_tasks(self):
        return len(self.block) > 0 or len(self.rescue) > 0 or len(self.always) > 0

    def get_include_params(self):
        if self._parent:
            return self._parent.get_include_params()
        else:
            return dict()

    def all_parents_static(self):
        '''
        Determine if all of the parents of this block were statically loaded
        or not. Since Task/TaskInclude objects may be in the chain, they
        simply call their parents all_parents_static() method. Only Block
        objects in the chain check the statically_loaded value of the parent.
        '''
        from ansible.playbook.task_include import TaskInclude
        if self._parent:
            if isinstance(self._parent, TaskInclude) and not self._parent.statically_loaded:
                return False
            return self._parent.all_parents_static()

        return True

    def get_first_parent_include(self):
        from ansible.playbook.task_include import TaskInclude
        if self._parent:
            if isinstance(self._parent, TaskInclude):
                return self._parent
            return self._parent.get_first_parent_include()
        return None
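The ``evaluate_and_append_task`` predicate above is the crux of the tag problem this record belongs to: for meta tasks, the ``task.action == 'meta'`` test short-circuits before tags are ever evaluated. Here is a simplified, self-contained sketch of that predicate with plain data instead of Task objects; the real ``evaluate_tags`` also handles ``all``, ``always``, ``never``, ``tagged``, and ``untagged``, which this deliberately omits to keep the short circuit visible.

```python
def keep_task(action, tags, only_tags, skip_tags):
    def tags_match():
        if tags & skip_tags:
            return False
        return not only_tags or bool(tags & only_tags)

    if action == 'meta':
        return True  # bypasses tag evaluation entirely -- the reported behaviour
    return tags_match()


assert keep_task('debug', {'a'}, only_tags={'a'}, skip_tags=set())
assert not keep_task('debug', {'modul'}, only_tags={'a'}, skip_tags=set())
assert keep_task('meta', {'modul'}, only_tags={'a'}, skip_tags=set())  # still runs
```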
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
test/integration/targets/tags/runme.sh
#!/usr/bin/env bash

set -eu

# Using set -x for this test causes the Shippable console to stop receiving updates and the job to time out for macOS.
# Once that issue is resolved the set -x option can be added above.

# Run these using en_US.UTF-8 because list-tasks is a user output function and so it tailors its output to the
# user's locale.  For unicode tags, this means replacing non-ascii chars with "?"

COMMAND=(ansible-playbook -i ../../inventory test_tags.yml -v --list-tasks)

export LC_ALL=en_US.UTF-8

# Run everything by default
[ "$("${COMMAND[@]}" | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3]" ]

# Run the exact tags, and always
[ "$("${COMMAND[@]}" --tags tag | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always]" ]

# Skip one tag
[ "$("${COMMAND[@]}" --skip-tags tag | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3]" ]

# Skip a unicode tag
[ "$("${COMMAND[@]}" --skip-tags 'くらとみ' | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3]" ]

# Run just a unicode tag and always
[ "$("${COMMAND[@]}" --tags 'くらとみ' | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ]" ]

# Run a tag from a list of tags and always
[ "$("${COMMAND[@]}" --tags café | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_list_of_tags TAGS: [café, press]" ]

# Run tag with never
[ "$("${COMMAND[@]}" --tags donever | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_never_tag TAGS: [donever, never]" ]

# Run csv tags
[ "$("${COMMAND[@]}" --tags tag1 | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_csv_tags TAGS: [tag1, tag2]" ]

# Run templated tags
[ "$("${COMMAND[@]}" --tags tag3 | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_templated_tags TAGS: [tag3]" ]

# Run tagged
[ "$("${COMMAND[@]}" --tags tagged | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3]" ]

# Run untagged
[ "$("${COMMAND[@]}" --tags untagged | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_without_tag TAGS: []" ]

# Skip 'always'
[ "$("${COMMAND[@]}" --tags untagged --skip-tags always | grep -F Task_with | xargs)" = \
"Task_without_tag TAGS: []" ]

# Test ansible_run_tags
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=all "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=all --tags all "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=list --tags tag1,tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=list --tags tag1 --tags tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=untagged --tags untagged "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=untagged_list --tags untagged,tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=tagged --tags tagged "$@"
closed
ansible/ansible
https://github.com/ansible/ansible
64,558
Tags don't work properly with module meta
##### SUMMARY Module meta execute every time, even if the action has a different tag specified. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME meta tasks (end_host, end_play) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.5 config file = /root/3-ansible-modul/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` CACHE_PLUGIN(/root/3-ansible-modul/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/root/3-ansible-modul/ansible.cfg) = /tmp/facts_cache CACHE_PLUGIN_TIMEOUT(/root/3-ansible-modul/ansible.cfg) = 7200 DEFAULT_GATHERING(/root/3-ansible-modul/ansible.cfg) = smart DEFAULT_HOST_LIST(/root/3-ansible-modul/ansible.cfg) = [u'/root/3-ansible-modul/multinode'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> vm1_main (ansible host): CentOS Linux release 7.6.1810 (Core) vm2 and vm3: CentOS Linux release 7.5.1804 (Core) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run playbook below with tag "a". <!--- Paste example playbooks or commands between quotes below --> ``` # cat playbook.yml --- - hosts: all remote_user: root gather_facts: yes tasks: - name: "a1" debug: msg: "a1" tags: a - name: "modul" debug: msg: "modul" tags: modul - name: "modul - endhost" meta: end_host tags: modul - name: "a2" debug: msg: "a2" tags: a - name: "modul2" debug: msg: "modul2" tags: modul ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Task "a2" should be also executed, because meta:end_host is under different tag. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> No matter what meta task i use, it always execute even if its action is specified within different tag. <!--- Paste verbatim command output between quotes --> ``` # ansible-playbook playbook.yml -t a -vvv PLAYBOOK: playbook.yml ********************************************************************************************** 1 plays in playbook.yml PLAY [all] ********************************************************************************************************** META: ran handlers TASK [a1] *********************************************************************************************************** task path: /root/3-ansible-modul/playbook.yml:7 ok: [vm1_main] => { "msg": "a1" } ok: [vm2] => { "msg": "a1" } ok: [vm3] => { "msg": "a1" } META: ending play for vm1_main META: ending play for vm2 META: ending play for vm3 PLAY RECAP ********************************************************************************************************** vm1_main : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 vm3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/64558
https://github.com/ansible/ansible/pull/67508
59b80b9146765382f7fbbeefe401fe33b0df033b
1425e3597be3c186ae50a7e488bc1f61c85a274f
2019-11-07T12:20:24Z
python
2020-08-21T15:08:49Z
test/integration/targets/tags/test_tags.yml
---
- name: verify tags work as expected
  hosts: testhost
  gather_facts: False
  vars:
    the_tags:
      - tag3
  tasks:
    - name: Task_with_tag
      debug: msg=
      tags: tag
    - name: Task_with_always_tag
      debug: msg=
      tags: always
    - name: Task_with_unicode_tag
      debug: msg=
      tags: くらとみ
    - name: Task_with_list_of_tags
      debug: msg=
      tags:
        - café
        - press
    - name: Task_without_tag
      debug: msg=
    - name: Task_with_never_tag
      debug: msg=NEVER
      tags: ['never', 'donever']
    - name: Task_with_csv_tags
      debug: msg=csv
      tags: tag1,tag2
    - name: Task_with_templated_tags
      debug: msg=templated
      tags: "{{ the_tags }}"
closed
ansible/ansible
https://github.com/ansible/ansible
68,402
add [galaxy] section to master ansible.cfg example
##### SUMMARY the example found here: https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg you can see an example of the `[galaxy]` header here-> https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#downloading-a-collection-from-automation-hub ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION Here is an example from a blog we are working on-> ``` [galaxy] server_list = automation_hub, release_galaxy [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx [galaxy_server.release_galaxy] url=https://galaxy.ansible.com/ token=xxxxxxxxxxxxxxxxxxxxxx ```
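With servers configured as in the snippet above, installs are resolved against ``server_list`` in order. A hedged sketch of a matching requirements file follows; the collection name is a placeholder:

```yaml
# requirements.yml -- install with: ansible-galaxy collection install -r requirements.yml
collections:
  # placeholder collection; looked up on automation_hub first, then release_galaxy
  - name: my_namespace.my_collection
    version: ">=1.0.0"
```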
https://github.com/ansible/ansible/issues/68402
https://github.com/ansible/ansible/pull/70931
13ab73cd89f9a300b0becf0a1d6911c57de27bc8
3f3bcbf05e46db08a0f5f88ec1eb4c72b82d9fd5
2020-03-23T12:43:50Z
python
2020-08-21T16:17:18Z
changelogs/fragments/68402_galaxy.yml
closed
ansible/ansible
https://github.com/ansible/ansible
68,402
add [galaxy] section to master ansible.cfg example
##### SUMMARY the example found here: https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg you can see an example of the `[galaxy]` header here-> https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#downloading-a-collection-from-automation-hub ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION Here is an example from a blog we are working on-> ``` [galaxy] server_list = automation_hub, release_galaxy [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx [galaxy_server.release_galaxy] url=https://galaxy.ansible.com/ token=xxxxxxxxxxxxxxxxxxxxxx ```
https://github.com/ansible/ansible/issues/68402
https://github.com/ansible/ansible/pull/70931
13ab73cd89f9a300b0becf0a1d6911c57de27bc8
3f3bcbf05e46db08a0f5f88ec1eb4c72b82d9fd5
2020-03-23T12:43:50Z
python
2020-08-21T16:17:18Z
examples/ansible.cfg
# Example config file for ansible -- https://ansible.com/ # ======================================================= # Nearly all parameters can be overridden in ansible-playbook # or with command line flags. Ansible will read ANSIBLE_CONFIG, # ansible.cfg in the current working directory, .ansible.cfg in # the home directory, or /etc/ansible/ansible.cfg, whichever it # finds first # For a full list of available options, run ansible-config list or see the # documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html. [defaults] #inventory = /etc/ansible/hosts #library = ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules #module_utils = ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils #remote_tmp = ~/.ansible/tmp #local_tmp = ~/.ansible/tmp #forks = 5 #poll_interval = 0.001 #ask_pass = False #transport = smart # Plays will gather facts by default, which contain information about # the remote system. # # smart - gather by default, but don't regather if already gathered # implicit - gather by default, turn off with gather_facts: False # explicit - do not gather by default, must say gather_facts: True #gathering = implicit # This only affects the gathering done by a play's gather_facts directive, # by default gathering retrieves all facts subsets # all - gather all subsets # network - gather min and network facts # hardware - gather hardware facts (longest facts to retrieve) # virtual - gather min and virtual facts # facter - import facts from facter # ohai - import facts from ohai # You can combine them using comma (ex: network,virtual) # You can negate them using ! (ex: !hardware,!facter,!ohai) # A minimal set of facts is always gathered. # #gather_subset = all # some hardware related facts are collected # with a maximum timeout of 10 seconds. This # option lets you increase or decrease that # timeout to something more suitable for the # environment. # #gather_timeout = 10 # Ansible facts are available inside the ansible_facts.* dictionary # namespace. This setting maintains the behaviour which was the default prior # to 2.5, duplicating these variables into the main namespace, each with a # prefix of 'ansible_'. # This variable is set to True by default for backwards compatibility. It # will be changed to a default of 'False' in a future release. # #inject_facts_as_vars = True # Paths to search for collections, colon separated # collections_paths = ~/.ansible/collections:/usr/share/ansible/collections # Paths to search for roles, colon separated #roles_path = ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles # Host key checking is enabled by default #host_key_checking = True # You can only have one 'stdout' callback type enabled at a time. The default # is 'default'. The 'yaml' or 'debug' stdout callback plugins are easier to read. # #stdout_callback = default #stdout_callback = yaml #stdout_callback = debug # Ansible ships with some plugins that require whitelisting, # this is done to avoid running all of a type by default. # These setting lists those that you want enabled for your system. # Custom plugins should not need this unless plugin author disables them # by default. # # Enable callback plugins, they can output to stdout but cannot be 'stdout' type. #callback_whitelist = timer, mail # Determine whether includes in tasks and handlers are "static" by # default. As of 2.0, includes are dynamic by default. Setting these # values to True will make includes behave more like they did in the # 1.x versions. 
# #task_includes_static = False #handler_includes_static = False # Controls if a missing handler for a notification event is an error or a warning #error_on_missing_handler = True # Default timeout for connection plugins #timeout = 10 # Default user to use for playbooks if user is not specified # Uses the connection plugin's default, normally the user currently executing Ansible, # unless a different user is specified here. # #remote_user = root # Logging is off by default unless this path is defined. #log_path = /var/log/ansible.log # Default module to use when running ad-hoc commands #module_name = command # Use this shell for commands executed under sudo. # you may need to change this to /bin/bash in rare instances # if sudo is constrained. # #executable = /bin/sh # By default, variables from roles will be visible in the global variable # scope. To prevent this, set the following option to True, and only # tasks and handlers within the role will see the variables there # #private_role_vars = False # List any Jinja2 extensions to enable here. #jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n # If set, always use this private key file for authentication, same as # if passing --private-key to ansible or ansible-playbook # #private_key_file = /path/to/file # If set, configures the path to the Vault password file as an alternative to # specifying --vault-password-file on the command line. This can also be # an executable script that returns the vault password to stdout. # #vault_password_file = /path/to/vault_password_file # Format of string {{ ansible_managed }} available within Jinja2 # templates indicates to users editing templates files will be replaced. # replacing {file}, {host} and {uid} and strftime codes with proper values. # #ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} # {file}, {host}, {uid}, and the timestamp can all interfere with idempotence # in some situations so the default is a static string: # #ansible_managed = Ansible managed # By default, ansible-playbook will display "Skipping [host]" if it determines a task # should not be run on a host. Set this to "False" if you don't want to see these "Skipping" # messages. NOTE: the task header will still be shown regardless of whether or not the # task is skipped. # #display_skipped_hosts = True # By default, if a task in a playbook does not include a name: field then # ansible-playbook will construct a header that includes the task's action but # not the task's args. This is a security feature because ansible cannot know # if the *module* considers an argument to be no_log at the time that the # header is printed. If your environment doesn't have a problem securing # stdout from ansible-playbook (or you have manually specified no_log in your # playbook on all of the tasks where you have secret information) then you can # safely set this to True to get more informative messages. # #display_args_to_stdout = False # Ansible will raise errors when attempting to dereference # Jinja2 variables that are not set in templates or action lines. Uncomment this line # to change this behavior. # #error_on_undefined_vars = False # Ansible may display warnings based on the configuration of the # system running ansible itself. This may include warnings about 3rd party packages or # other conditions that should be resolved if possible. 
# To disable these warnings, set the following value to False: # #system_warnings = True # Ansible may display deprecation warnings for language # features that should no longer be used and will be removed in future versions. # To disable these warnings, set the following value to False: # #deprecation_warnings = True # Ansible can optionally warn when usage of the shell and # command module appear to be simplified by using a default Ansible module # instead. These warnings can be silenced by adjusting the following # setting or adding warn=yes or warn=no to the end of the command line # parameter string. This will for example suggest using the git module # instead of shelling out to the git command. # #command_warnings = False # set plugin path directories here, separate with colons #action_plugins = /usr/share/ansible/plugins/action #become_plugins = /usr/share/ansible/plugins/become #cache_plugins = /usr/share/ansible/plugins/cache #callback_plugins = /usr/share/ansible/plugins/callback #connection_plugins = /usr/share/ansible/plugins/connection #lookup_plugins = /usr/share/ansible/plugins/lookup #inventory_plugins = /usr/share/ansible/plugins/inventory #vars_plugins = /usr/share/ansible/plugins/vars #filter_plugins = /usr/share/ansible/plugins/filter #test_plugins = /usr/share/ansible/plugins/test #terminal_plugins = /usr/share/ansible/plugins/terminal #strategy_plugins = /usr/share/ansible/plugins/strategy # Ansible will use the 'linear' strategy but you may want to try another one. #strategy = linear # By default, callbacks are not loaded for /bin/ansible. Enable this if you # want, for example, a notification or logging callback to also apply to # /bin/ansible runs # #bin_ansible_callbacks = False # Don't like cows? that's unfortunate. # set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 #nocows = 1 # Set which cowsay stencil you'd like to use by default. When set to 'random', # a random stencil will be selected for each task. The selection will be filtered # against the `cow_whitelist` option below. # #cow_selection = default #cow_selection = random # When using the 'random' option for cowsay, stencils will be restricted to this list. # it should be formatted as a comma-separated list with no spaces between names. # NOTE: line continuations here are for formatting purposes only, as the INI parser # in python does not support them. # #cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\ # hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\ # stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www # Don't like colors either? # set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 # #nocolor = 1 # If set to a persistent type (not 'memory', for example 'redis') fact values # from previous runs in Ansible will be stored. This may be useful when # wanting to use, for example, IP information from one group of servers # without having to talk to them in the same playbook run to get their # current IP information. # #fact_caching = memory # This option tells Ansible where to cache facts. The value is plugin dependent. # For the jsonfile plugin, it should be a path to a local directory. 
# For the redis plugin, the value is a host:port:database triplet: fact_caching_connection = localhost:6379:0 # #fact_caching_connection=/tmp # retry files # When a playbook fails a .retry file can be created that will be placed in ~/ # You can enable this feature by setting retry_files_enabled to True # and you can change the location of the files by setting retry_files_save_path # #retry_files_enabled = False #retry_files_save_path = ~/.ansible-retry # prevents logging of task data, off by default #no_log = False # prevents logging of tasks, but only on the targets, data is still logged on the master/controller #no_target_syslog = False # Controls whether Ansible will raise an error or warning if a task has no # choice but to create world readable temporary files to execute a module on # the remote machine. This option is False by default for security. Users may # turn this on to have behaviour more like Ansible prior to 2.1.x. See # https://docs.ansible.com/ansible/latest/user_guide/become.html#becoming-an-unprivileged-user # for more secure ways to fix this than enabling this option. # #allow_world_readable_tmpfiles = False # Controls what compression method is used for new-style ansible modules when # they are sent to the remote system. The compression types depend on having # support compiled into both the controller's python and the client's python. # The names should match with the python Zipfile compression types: # * ZIP_STORED (no compression. available everywhere) # * ZIP_DEFLATED (uses zlib, the default) # These values may be set per host via the ansible_module_compression inventory variable. # #module_compression = 'ZIP_DEFLATED' # This controls the cutoff point (in bytes) on --diff for files # set to 0 for unlimited (RAM may suffer!). # #max_diff_size = 104448 # Controls showing custom stats at the end, off by default #show_custom_stats = False # Controls which files to ignore when using a directory as inventory with # possibly multiple sources (both static and dynamic) # #inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo # This family of modules use an alternative execution path optimized for network appliances # only update this setting if you know how this works, otherwise it can break module execution # #network_group_modules=eos, nxos, ios, iosxr, junos, vyos # When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as # a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain # jinja2 templating language which will be run through the templating engine. # ENABLING THIS COULD BE A SECURITY RISK # #allow_unsafe_lookups = False # set default errors for all plays #any_errors_fatal = False [inventory] # List of enabled inventory plugins and the order in which they are used. #enable_plugins = host_list, script, auto, yaml, ini, toml # Ignore these extensions when parsing a directory as inventory source #ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry # ignore files matching these patterns when parsing a directory as inventory source #ignore_patterns= # If 'True' unparsed inventory sources become fatal errors, otherwise they are warnings. 
#unparsed_is_failed = False [privilege_escalation] #become = False #become_method = sudo #become_ask_pass = False ## Connection Plugins ## # Settings for each connection plugin go under a section titled '[[plugin_name]_connection]' # To view available connection plugins, run ansible-doc -t connection -l # To view available options for a connection plugin, run ansible-doc -t connection [plugin_name] # https://docs.ansible.com/ansible/latest/plugins/connection.html [paramiko_connection] # uncomment this line to cause the paramiko connection plugin to not record new host # keys encountered. Increases performance on new host additions. Setting works independently of the # host key checking setting above. #record_host_keys=False # by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this # line to disable this behaviour. #pty = False # paramiko will default to looking for SSH keys initially when trying to # authenticate to remote devices. This is a problem for some network devices # that close the connection after a key failure. Uncomment this line to # disable the Paramiko look for keys function #look_for_keys = False # When using persistent connections with Paramiko, the connection runs in a # background process. If the host doesn't already have a valid SSH key, by # default Ansible will prompt to add the host key. This will cause connections # running in background processes to fail. Uncomment this line to have # Paramiko automatically add host keys. #host_key_auto_add = True [ssh_connection] # ssh arguments to use # Leaving off ControlPersist will result in poor performance, so use # paramiko on older platforms rather than removing it, -C controls compression use #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s # The base directory for the ControlPath sockets. # This is the "%(directory)s" in the control_path option # # Example: # control_path_dir = /tmp/.ansible/cp #control_path_dir = ~/.ansible/cp # The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname, # port and username (empty string in the config). The hash mitigates a common problem users # found with long hostnames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format. # In those cases, a "too long for Unix domain socket" ssh error would occur. # # Example: # control_path = %(directory)s/%%C #control_path = # Enabling pipelining reduces the number of SSH operations required to # execute a module on the remote server. This can result in a significant # performance improvement when enabled, however when using "sudo:" you must # first disable 'requiretty' in /etc/sudoers # # By default, this option is disabled to preserve compatibility with # sudoers configurations that have requiretty (the default on many distros). # #pipelining = False # Control the mechanism for transferring files (old) # * smart = try sftp and then try scp [default] # * True = use scp only # * False = use sftp only #scp_if_ssh = smart # Control the mechanism for transferring files (new) # If set, this will override the scp_if_ssh option # * sftp = use sftp to transfer files # * scp = use scp to transfer files # * piped = use 'dd' over SSH to transfer files # * smart = try sftp, scp, and piped, in that order [default] #transfer_method = smart # If False, sftp will not use batch mode to transfer files. 
This may cause some # types of file transfer failures impossible to catch however, and should # only be disabled if your sftp version has problems with batch mode #sftp_batch_mode = False # The -tt argument is passed to ssh when pipelining is not enabled because sudo # requires a tty by default. #usetty = True # Number of times to retry an SSH connection to a host, in case of UNREACHABLE. # For each retry attempt, there is an exponential backoff, # so after the first attempt there is 1s wait, then 2s, 4s etc. up to 30s (max). #retries = 3 [persistent_connection] # Configures the persistent connection timeout value in seconds. This value is # how long the persistent connection will remain idle before it is destroyed. # If the connection doesn't receive a request before the timeout value # expires, the connection is shutdown. The default value is 30 seconds. #connect_timeout = 30 # The command timeout value defines the amount of time to wait for a command # or RPC call before timing out. The value for the command timeout must # be less than the value of the persistent connection idle timeout (connect_timeout) # The default value is 30 second. #command_timeout = 30 ## Become Plugins ## # Settings for become plugins go under a section named '[[plugin_name]_become_plugin]' # To view available become plugins, run ansible-doc -t become -l # To view available options for a specific plugin, run ansible-doc -t become [plugin_name] # https://docs.ansible.com/ansible/latest/plugins/become.html [sudo_become_plugin] #flags = -H -S -n #user = root [selinux] # file systems that require special treatment when dealing with security context # the default behaviour that copies the existing context or uses the user default # needs to be changed to use the file system dependent context. #special_context_filesystems=fuse,nfs,vboxsf,ramfs,9p,vfat # Set this to True to allow libvirt_lxc connections to work without SELinux. #libvirt_lxc_noseclabel = False [colors] #highlight = white #verbose = blue #warn = bright purple #error = red #debug = dark gray #deprecate = purple #skip = cyan #unreachable = red #ok = green #changed = yellow #diff_add = green #diff_remove = red #diff_lines = cyan [diff] # Always print diff when running ( same as always running with -D/--diff ) #always = False # Set how many context lines to show in diff #context = 3
closed
ansible/ansible
https://github.com/ansible/ansible
71,373
inventory plugin documentation isn't good enough
##### SUMMARY Trying to use the new "ec2 inventory" plugin with Ansible 2.9.x. I went to the "Enabling inventory plugins" page (https://docs.ansible.com/ansible/latest/plugins/inventory.html#enabling-inventory-plugins). And as instructed, added an `ansible.cfg` file to my current dir. Also added a file called `inventory_aws_ec2.yml` with the contents `plugin: aws_ec2`. When running `ansible-inventory --graph` I was expecting to at least get an error back or something related to the ec2 plugin. Based on this (https://medium.com/faun/learning-the-ansible-aws-ec2-dynamic-inventory-plugin-59dd6a929c7f) article I managed to get it working, but IMHO the documentation is far too weak here: - Should I use the `inventory` section and not the `defaults` section in `ansible.cfg`? Why/why not? - The part about an "auto" plugin is extremely confusing. It looks like after having enabled the plugin I didn't have to enable the plugin after all, because the auto plugin takes care of it? - Is `enable_plugins` an additive setting? Do I remove other default plugins by setting `enable_plugins = aws_ec2`? All in all, I found inventory plugins to be a surprisingly quirky thing to set up. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME inventory plugin ##### ANSIBLE VERSION ``` ansible 2.9.12 config file = /home/trond/Documents/projects-mohawk/strange-dog/ansible.cfg configured module search path = ['/home/trond/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0] ```
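For reference, ``enable_plugins`` is not additive: setting it replaces the default list, so ``auto`` (and anything else you still rely on) must be re-listed. A minimal sketch of the inventory source the reporter describes, with a placeholder region:

```yaml
# inventory_aws_ec2.yml -- the file name must end in aws_ec2.yml or aws_ec2.yaml,
# and either 'aws_ec2' or 'auto' must appear in [inventory] enable_plugins
plugin: aws_ec2
regions:
  - us-east-1   # placeholder; any region list the account can query works
```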
https://github.com/ansible/ansible/issues/71373
https://github.com/ansible/ansible/pull/71387
3f3bcbf05e46db08a0f5f88ec1eb4c72b82d9fd5
fb035da3b26476c028ae76937192739bd6cb30f7
2020-08-20T10:40:23Z
python
2020-08-21T16:26:02Z
docs/docsite/rst/plugins/inventory.rst
.. _inventory_plugins:

Inventory Plugins
=================

.. contents::
   :local:
   :depth: 2

Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, either via the ``-i /path/to/file`` and/or ``-i 'host1, host2'`` command line parameters or from other configuration sources.

.. _enabling_inventory:

Enabling inventory plugins
--------------------------

Most inventory plugins shipped with Ansible are disabled by default and need to be whitelisted in your :ref:`ansible.cfg <ansible_configuration_settings>` file in order to function. This is how the default whitelist looks in the config file that ships with Ansible:

.. code-block:: ini

   [inventory]
   enable_plugins = host_list, script, auto, yaml, ini, toml

This list also establishes the order in which each plugin tries to parse an inventory source. Any plugins left out of the list will not be considered, so you can 'optimize' your inventory loading by minimizing it to what you actually use. For example:

.. code-block:: ini

   [inventory]
   enable_plugins = advanced_host_list, constructed, yaml

The ``auto`` inventory plugin can be used to automatically determine which inventory plugin to use for a YAML configuration file. It can also be used for inventory plugins in a collection. To whitelist specific inventory plugins in a collection you need to use the fully qualified name:

.. code-block:: ini

   [inventory]
   enable_plugins = namespace.collection_name.inventory_plugin_name

.. _using_inventory:

Using inventory plugins
-----------------------

The only requirement for using an inventory plugin after it is enabled is to provide an inventory source to parse. Ansible will try to use the list of enabled inventory plugins, in order, against each inventory source provided. Once an inventory plugin succeeds at parsing a source, any remaining inventory plugins will be skipped for that source.

To start using an inventory plugin with a YAML configuration source, create a file with the accepted filename schema for the plugin in question, then add ``plugin: plugin_name``. Each plugin documents any naming restrictions. For example, the aws_ec2 inventory plugin has to end with ``aws_ec2.(yml|yaml)``:

.. code-block:: yaml

   # demo.aws_ec2.yml
   plugin: aws_ec2

Or for the openstack plugin the file has to be called ``clouds.yml`` or ``openstack.(yml|yaml)``:

.. code-block:: yaml

   # clouds.yml or openstack.(yml|yaml)
   plugin: openstack

To use a plugin in a collection provide the fully qualified name:

.. code-block:: yaml

   plugin: namespace.collection_name.inventory_plugin_name

The ``auto`` inventory plugin is enabled by default and works by using the ``plugin`` field to indicate the plugin that should attempt to parse it. You can configure the whitelist/precedence of inventory plugins used to parse a source using the `ansible.cfg` ['inventory'] ``enable_plugins`` list. After enabling the plugin and providing any required options, you can view the populated inventory with ``ansible-inventory -i demo.aws_ec2.yml --graph``:

.. code-block:: text

   @all:
     |--@aws_ec2:
     |  |--ec2-12-345-678-901.compute-1.amazonaws.com
     |  |--ec2-98-765-432-10.compute-1.amazonaws.com
     |--@ungrouped:

If you are using an inventory plugin in a playbook-adjacent collection and want to test your setup with ``ansible-inventory``, you will need to use the ``--playbook-dir`` flag.
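Tying the two halves together, a purely illustrative pairing of whitelist entry and source file for a collection-hosted plugin (the namespace, collection name, and filename pattern below are placeholders):

.. code-block:: yaml

   # inventory.my_inventory_plugin.yml
   # requires: enable_plugins = auto (or the plugin's fully qualified name) in ansible.cfg
   plugin: my_namespace.my_collection.my_inventory_plugin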
You can set the default inventory path (via ``inventory`` in the `ansible.cfg` [defaults] section or the :envvar:`ANSIBLE_INVENTORY` environment variable) to your inventory source(s). Now running ``ansible-inventory --graph`` should yield the same output as when you passed your YAML configuration source(s) directly. You can add custom inventory plugins to your plugin path to use in the same way.

Your inventory source might be a directory of inventory configuration files. The constructed inventory plugin only operates on those hosts already in inventory, so you may want the constructed inventory configuration parsed at a particular point (such as last). Ansible parses the directory recursively, alphabetically. You cannot configure the parsing approach, so name your files to make it work predictably. Inventory plugins that extend constructed features directly can work around that restriction by adding constructed options in addition to the inventory plugin options. Otherwise, you can use ``-i`` with multiple sources to impose a specific order, e.g. ``-i demo.aws_ec2.yml -i clouds.yml -i constructed.yml``.

You can create dynamic groups using host variables with the constructed ``keyed_groups`` option. The option ``groups`` can also be used to create groups and ``compose`` creates and modifies host variables. Here is an aws_ec2 example utilizing constructed features:

.. code-block:: yaml

   # demo.aws_ec2.yml
   plugin: aws_ec2
   regions:
     - us-east-1
     - us-east-2
   keyed_groups:
     # add hosts to tag_Name_value groups for each aws_ec2 host's tags.Name variable
     - key: tags.Name
       prefix: tag_Name_
       separator: ""
   groups:
     # add hosts to the group development if any of the dictionary's keys or values is the word 'devel'
     development: "'devel' in (tags|list)"
   compose:
     # set the ansible_host variable to connect with the private IP address without changing the hostname
     ansible_host: private_ip_address

Now the output of ``ansible-inventory -i demo.aws_ec2.yml --graph``:

.. code-block:: text

   @all:
     |--@aws_ec2:
     |  |--ec2-12-345-678-901.compute-1.amazonaws.com
     |  |--ec2-98-765-432-10.compute-1.amazonaws.com
     |  |--...
     |--@development:
     |  |--ec2-12-345-678-901.compute-1.amazonaws.com
     |  |--ec2-98-765-432-10.compute-1.amazonaws.com
     |--@tag_Name_ECS_Instance:
     |  |--ec2-98-765-432-10.compute-1.amazonaws.com
     |--@tag_Name_Test_Server:
     |  |--ec2-12-345-678-901.compute-1.amazonaws.com
     |--@ungrouped

If a host does not have the variables in the configuration above (i.e. ``tags.Name``, ``tags``, ``private_ip_address``), the host will not be added to groups other than those that the inventory plugin creates and the ``ansible_host`` host variable will not be modified.

If an inventory plugin supports caching, you can enable and set caching options for an individual YAML configuration source or for multiple inventory sources using environment variables or Ansible configuration files. If you enable caching for an inventory plugin without providing inventory-specific caching options, the inventory plugin will use fact-caching options. Here is an example of enabling caching for an individual YAML configuration file:

.. code-block:: yaml

   # demo.aws_ec2.yml
   plugin: aws_ec2
   cache: yes
   cache_plugin: jsonfile
   cache_timeout: 7200
   cache_connection: /tmp/aws_inventory
   cache_prefix: aws_ec2

Here is an example of setting inventory caching with some fact caching defaults for the cache plugin used and the timeout in an ``ansible.cfg`` file:

.. code-block:: ini

   [defaults]
   fact_caching = jsonfile
   fact_caching_connection = /tmp/ansible_facts
   cache_timeout = 3600

   [inventory]
   cache = yes
   cache_connection = /tmp/ansible_inventory

Besides cache plugins shipped with Ansible, cache plugins eligible for caching inventory can also reside in a custom cache plugin path or in a collection. Use FQCN if the cache plugin is in a collection.

.. _inventory_plugin_list:

Plugin List
-----------

You can use ``ansible-doc -t inventory -l`` to see the list of available plugins. Use ``ansible-doc -t inventory <plugin name>`` to see plugin-specific documentation and examples.

.. seealso::

   :ref:`about_playbooks`
       An introduction to playbooks
   :ref:`callback_plugins`
       Ansible callback plugins
   :ref:`connection_plugins`
       Ansible connection plugins
   :ref:`playbooks_filters`
       Jinja2 filter plugins
   :ref:`playbooks_tests`
       Jinja2 test plugins
   :ref:`playbooks_lookups`
       Jinja2 lookup plugins
   :ref:`vars_plugins`
       Ansible vars plugins
   `User Mailing List <https://groups.google.com/group/ansible-devel>`_
       Have a question? Stop by the google group!
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel
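Supplementing the multiple-source ordering discussion above, here is a hedged sketch of what a trailing standalone ``constructed.yml`` source might contain; the group expressions are illustrative only:

.. code-block:: yaml

   # constructed.yml -- parsed last, so hosts from the earlier sources already exist
   plugin: constructed
   strict: no   # skip hosts that lack the referenced variables instead of failing
   groups:
     development: "'devel' in (tags | default({}) | list)"
   keyed_groups:
     # build aws_region_<region> groups from an aws_ec2-provided host variable
     - key: placement.region
       prefix: aws_region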
closed
ansible/ansible
https://github.com/ansible/ansible
59,795
put_file method in psrp connection plugin breaks JEA
#### SUMMARY PSRP connection plugin does not respect the JEA configuration parameters when running put_file(). It instead creates a new shell which does not support ansible_psrp_configuration_name parameter. This results in access denied errors when using modules such as win_template which rely on put_file() see: https://github.com/ansible/ansible/blob/6adf0c581ee005f514ea68e80c0f91e97e63dcea/lib/ansible/plugins/connection/psrp.py#L448 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME plugins/connection/psrp.py ##### ANSIBLE VERSION ```plain ansible 2.8.1 config file = /root/ansible-windows/ansible.cfg configured module search path = [u'/root/ansible-windows/shared/library', u'/root/ansible-windows/library'] ansible python module location = /usr/local/lib/python2.7/site-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.16 (default, Jun 11 2019, 02:02:48) [GCC 6.3.0 20170516] ``` ##### CONFIGURATION ```plain DEFAULT_CALLBACK_PLUGIN_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/plugins/callback'] DEFAULT_MANAGED_STR(/root/ansible-windows/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host} DEFAULT_MODULE_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/shared/library', u'/root/ansible-windows/library'] DEFAULT_ROLES_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/shared/roles', u'/root/ansible-windows/roles'] DEFAULT_STDOUT_CALLBACK(/root/ansible-windows/ansible.cfg) = default RETRY_FILES_ENABLED(/root/ansible-windows/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Target OS: Windows Server 2016 Standard ##### STEPS TO REPRODUCE This is a bit more involved as it requires a Windows machine with JEA configured but for illustration: ```yml --- - hosts: all gather_facts: no vars: ansible_user: "{{ jea_username }}" ansible_password: "{{ jea_password }}" ansible_connection: psrp ansible_psrp_configuration_name: JEA-ConfMgmt ansible_psrp_auth: ntlm ansible_psrp_cert_validation: ignore ansible_psrp_protocol: https tasks: - win_template: src: myScript.ps1 dest: "C:/build/myScript.ps1" ``` ##### EXPECTED RESULTS win_template should be able to transfer files to a windows target machine when running with a JEA configuration ##### ACTUAL RESULTS module fails with access denied. 
```plain The full traceback is: Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 144, in run res = self._execute() File "/usr/local/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 648, in _execute result = self._handler.run(task_vars=variables) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/template.py", line 189, in run result.update(copy_action.run(task_vars=task_vars)) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 517, in run module_return = self._copy_file(source_full, source_rel, content, content_tempfile, dest, task_vars, follow) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 300, in _copy_file remote_path = self._transfer_file(source_full, tmp_src) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/__init__.py", line 424, in _transfer_file self._connection.put_file(local_path, remote_path) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/psrp.py", line 452, in put_file with WinRS(self.runspace.connection, codepage=65001) as shell: File "/usr/local/lib/python2.7/site-packages/pypsrp/shell.py", line 98, in __enter__ self.open() File "/usr/local/lib/python2.7/site-packages/pypsrp/shell.py", line 204, in open option_set=options) File "/usr/local/lib/python2.7/site-packages/pypsrp/wsman.py", line 259, in create option_set, selector_set, timeout) File "/usr/local/lib/python2.7/site-packages/pypsrp/wsman.py", line 382, in invoke raise self._parse_wsman_fault(err.response_text) WSManFaultError: Received a WSManFault message. (Code: 5, Machine: cevoktaagent6.c3.zone, Reason: Access is denied.) ```
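For reference, the same JEA connection settings expressed as group variables (endpoint name mirrors the report; the group name and file path are placeholders). Per the report, modules that only execute commands worked with this setup, while anything relying on put_file() failed as shown above:

```yaml
# group_vars/windows_jea.yml (hypothetical group)
ansible_connection: psrp
ansible_psrp_protocol: https
ansible_psrp_auth: ntlm
ansible_psrp_cert_validation: ignore
ansible_psrp_configuration_name: JEA-ConfMgmt   # the constrained endpoint that put_file() bypassed
```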
https://github.com/ansible/ansible/issues/59795
https://github.com/ansible/ansible/pull/71409
d099591964d6fedc174eb1d3fc1bbee8d2ba0f16
985ba187b22d4aac94c95013eccbd8ca3a9d490e
2019-07-30T16:08:20Z
python
2020-08-25T02:23:40Z
changelogs/fragments/psrp-copy.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
59,795
put_file method in psrp connection plugin breaks JEA
#### SUMMARY PSRP connection plugin does not respect the JEA configuration parameters when running put_file(). It instead creates a new shell which does not support ansible_psrp_configuration_name parameter. This results in access denied errors when using modules such as win_template which rely on put_file() see: https://github.com/ansible/ansible/blob/6adf0c581ee005f514ea68e80c0f91e97e63dcea/lib/ansible/plugins/connection/psrp.py#L448 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME plugins/connection/psrp.py ##### ANSIBLE VERSION ```plain ansible 2.8.1 config file = /root/ansible-windows/ansible.cfg configured module search path = [u'/root/ansible-windows/shared/library', u'/root/ansible-windows/library'] ansible python module location = /usr/local/lib/python2.7/site-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.16 (default, Jun 11 2019, 02:02:48) [GCC 6.3.0 20170516] ``` ##### CONFIGURATION ```plain DEFAULT_CALLBACK_PLUGIN_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/plugins/callback'] DEFAULT_MANAGED_STR(/root/ansible-windows/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host} DEFAULT_MODULE_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/shared/library', u'/root/ansible-windows/library'] DEFAULT_ROLES_PATH(/root/ansible-windows/ansible.cfg) = [u'/root/ansible-windows/shared/roles', u'/root/ansible-windows/roles'] DEFAULT_STDOUT_CALLBACK(/root/ansible-windows/ansible.cfg) = default RETRY_FILES_ENABLED(/root/ansible-windows/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Target OS: Windows Server 2016 Standard ##### STEPS TO REPRODUCE This is a bit more involved as it requires a Windows machine with JEA configured but for illustration: ```yml --- - hosts: all gather_facts: no vars: ansible_user: "{{ jea_username }}" ansible_password: "{{ jea_password }}" ansible_connection: psrp ansible_psrp_configuration_name: JEA-ConfMgmt ansible_psrp_auth: ntlm ansible_psrp_cert_validation: ignore ansible_psrp_protocol: https tasks: - win_template: src: myScript.ps1 dest: "C:/build/myScript.ps1" ``` ##### EXPECTED RESULTS win_template should be able to transfer files to a windows target machine when running with a JEA configuration ##### ACTUAL RESULTS module fails with access denied. 
```plain The full traceback is: Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 144, in run res = self._execute() File "/usr/local/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 648, in _execute result = self._handler.run(task_vars=variables) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/template.py", line 189, in run result.update(copy_action.run(task_vars=task_vars)) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 517, in run module_return = self._copy_file(source_full, source_rel, content, content_tempfile, dest, task_vars, follow) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 300, in _copy_file remote_path = self._transfer_file(source_full, tmp_src) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/action/__init__.py", line 424, in _transfer_file self._connection.put_file(local_path, remote_path) File "/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/psrp.py", line 452, in put_file with WinRS(self.runspace.connection, codepage=65001) as shell: File "/usr/local/lib/python2.7/site-packages/pypsrp/shell.py", line 98, in __enter__ self.open() File "/usr/local/lib/python2.7/site-packages/pypsrp/shell.py", line 204, in open option_set=options) File "/usr/local/lib/python2.7/site-packages/pypsrp/wsman.py", line 259, in create option_set, selector_set, timeout) File "/usr/local/lib/python2.7/site-packages/pypsrp/wsman.py", line 382, in invoke raise self._parse_wsman_fault(err.response_text) WSManFaultError: Received a WSManFault message. (Code: 5, Machine: cevoktaagent6.c3.zone, Reason: Access is denied.) ```
https://github.com/ansible/ansible/issues/59795
https://github.com/ansible/ansible/pull/71409
d099591964d6fedc174eb1d3fc1bbee8d2ba0f16
985ba187b22d4aac94c95013eccbd8ca3a9d490e
2019-07-30T16:08:20Z
python
2020-08-25T02:23:40Z
lib/ansible/plugins/connection/psrp.py
# Copyright (c) 2018 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ author: Ansible Core Team connection: psrp short_description: Run tasks over Microsoft PowerShell Remoting Protocol description: - Run commands or put/fetch on a target via PSRP (WinRM plugin) - This is similar to the I(winrm) connection plugin which uses the same underlying transport but instead runs in a PowerShell interpreter. version_added: "2.7" requirements: - pypsrp (Python library) options: # transport options remote_addr: description: - The hostname or IP address of the remote host. default: inventory_hostname type: str vars: - name: ansible_host - name: ansible_psrp_host remote_user: description: - The user to log in as. type: str vars: - name: ansible_user - name: ansible_psrp_user remote_password: description: Authentication password for the C(remote_user). Can be supplied as CLI option. type: str vars: - name: ansible_password - name: ansible_winrm_pass - name: ansible_winrm_password aliases: - password # Needed for --ask-pass to come through on delegation port: description: - The port for PSRP to connect on the remote target. - Default is C(5986) if I(protocol) is not defined or is C(https), otherwise the port is C(5985). type: int vars: - name: ansible_port - name: ansible_psrp_port protocol: description: - Set the protocol to use for the connection. - Default is C(https) if I(port) is not defined or I(port) is not C(5985). choices: - http - https type: str vars: - name: ansible_psrp_protocol path: description: - The URI path to connect to. type: str vars: - name: ansible_psrp_path default: 'wsman' auth: description: - The authentication protocol to use when authenticating the remote user. - The default, C(negotiate), will attempt to use C(Kerberos) if it is available and fall back to C(NTLM) if it isn't. type: str vars: - name: ansible_psrp_auth choices: - basic - certificate - negotiate - kerberos - ntlm - credssp default: negotiate cert_validation: description: - Whether to validate the remote server's certificate or not. - Set to C(ignore) to not validate any certificates. - I(ca_cert) can be set to the path of a PEM certificate chain to use in the validation. choices: - validate - ignore default: validate type: str vars: - name: ansible_psrp_cert_validation ca_cert: description: - The path to a PEM certificate chain to use when validating the server's certificate. - This value is ignored if I(cert_validation) is set to C(ignore). type: path vars: - name: ansible_psrp_cert_trust_path - name: ansible_psrp_ca_cert aliases: [ cert_trust_path ] connection_timeout: description: - The connection timeout for making the request to the remote host. - This is measured in seconds. type: int vars: - name: ansible_psrp_connection_timeout default: 30 read_timeout: description: - The read timeout for receiving data from the remote host. - This value must always be greater than I(operation_timeout). - This option requires pypsrp >= 0.3. - This is measured in seconds. type: int vars: - name: ansible_psrp_read_timeout default: 30 version_added: '2.8' reconnection_retries: description: - The number of retries on connection errors. type: int vars: - name: ansible_psrp_reconnection_retries default: 0 version_added: '2.8' reconnection_backoff: description: - The backoff time to use in between reconnection attempts. 
(First sleeps X, then sleeps 2*X, then sleeps 4*X, ...) - This is measured in seconds. - The C(ansible_psrp_reconnection_backoff) variable was added in Ansible 2.9. type: int vars: - name: ansible_psrp_connection_backoff - name: ansible_psrp_reconnection_backoff default: 2 version_added: '2.8' message_encryption: description: - Controls the message encryption settings, this is different from TLS encryption when I(ansible_psrp_protocol) is C(https). - Only the auth protocols C(negotiate), C(kerberos), C(ntlm), and C(credssp) can do message encryption. The other authentication protocols only support encryption when C(protocol) is set to C(https). - C(auto) means means message encryption is only used when not using TLS/HTTPS. - C(always) is the same as C(auto) but message encryption is always used even when running over TLS/HTTPS. - C(never) disables any encryption checks that are in place when running over HTTP and disables any authentication encryption processes. type: str vars: - name: ansible_psrp_message_encryption choices: - auto - always - never default: auto proxy: description: - Set the proxy URL to use when connecting to the remote host. vars: - name: ansible_psrp_proxy type: str ignore_proxy: description: - Will disable any environment proxy settings and connect directly to the remote host. - This option is ignored if C(proxy) is set. vars: - name: ansible_psrp_ignore_proxy type: bool default: 'no' # auth options certificate_key_pem: description: - The local path to an X509 certificate key to use with certificate auth. type: path vars: - name: ansible_psrp_certificate_key_pem certificate_pem: description: - The local path to an X509 certificate to use with certificate auth. type: path vars: - name: ansible_psrp_certificate_pem credssp_auth_mechanism: description: - The sub authentication mechanism to use with CredSSP auth. - When C(auto), both Kerberos and NTLM is attempted with kerberos being preferred. type: str choices: - auto - kerberos - ntlm default: auto vars: - name: ansible_psrp_credssp_auth_mechanism credssp_disable_tlsv1_2: description: - Disables the use of TLSv1.2 on the CredSSP authentication channel. - This should not be set to C(yes) unless dealing with a host that does not have TLSv1.2. default: no type: bool vars: - name: ansible_psrp_credssp_disable_tlsv1_2 credssp_minimum_version: description: - The minimum CredSSP server authentication version that will be accepted. - Set to C(5) to ensure the server has been patched and is not vulnerable to CVE 2018-0886. default: 2 type: int vars: - name: ansible_psrp_credssp_minimum_version negotiate_delegate: description: - Allow the remote user the ability to delegate it's credentials to another server, i.e. credential delegation. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. type: bool vars: - name: ansible_psrp_negotiate_delegate negotiate_hostname_override: description: - Override the remote hostname when searching for the host in the Kerberos lookup. - This allows Ansible to connect over IP but authenticate with the remote server using it's DNS name. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. type: str vars: - name: ansible_psrp_negotiate_hostname_override negotiate_send_cbt: description: - Send the Channel Binding Token (CBT) structure when authenticating. 
- CBT is used to provide extra protection against Man in the Middle C(MitM) attacks by binding the outer transport channel to the auth channel. - CBT is not used when using just C(HTTP), only C(HTTPS). default: yes type: bool vars: - name: ansible_psrp_negotiate_send_cbt negotiate_service: description: - Override the service part of the SPN used during Kerberos authentication. - Only valid when Kerberos was the negotiated auth or was explicitly set as the authentication. - Ignored when NTLM was the negotiated auth. default: WSMAN type: str vars: - name: ansible_psrp_negotiate_service # protocol options operation_timeout: description: - Sets the WSMan timeout for each operation. - This is measured in seconds. - This should not exceed the value for C(connection_timeout). type: int vars: - name: ansible_psrp_operation_timeout default: 20 max_envelope_size: description: - Sets the maximum size of each WSMan message sent to the remote host. - This is measured in bytes. - Defaults to C(150KiB) for compatibility with older hosts. type: int vars: - name: ansible_psrp_max_envelope_size default: 153600 configuration_name: description: - The name of the PowerShell configuration endpoint to connect to. type: str vars: - name: ansible_psrp_configuration_name default: Microsoft.PowerShell """ import base64 import json import logging import os from ansible import constants as C from ansible.errors import AnsibleConnectionFailure, AnsibleError from ansible.errors import AnsibleFileNotFound from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.plugins.connection import ConnectionBase from ansible.plugins.shell.powershell import _common_args from ansible.utils.display import Display from ansible.utils.hashing import secure_hash HAS_PYPSRP = True PYPSRP_IMP_ERR = None try: import pypsrp from pypsrp.complex_objects import GenericComplexObject, PSInvocationState, RunspacePoolState from pypsrp.exceptions import AuthenticationError, WinRMError from pypsrp.host import PSHost, PSHostUserInterface from pypsrp.powershell import PowerShell, RunspacePool from pypsrp.shell import Process, SignalCode, WinRS from pypsrp.wsman import WSMan, AUTH_KWARGS from requests.exceptions import ConnectionError, ConnectTimeout except ImportError as err: HAS_PYPSRP = False PYPSRP_IMP_ERR = err display = Display() class Connection(ConnectionBase): transport = 'psrp' module_implementation_preferences = ('.ps1', '.exe', '') allow_executable = False has_pipelining = True allow_extras = True def __init__(self, *args, **kwargs): self.always_pipeline_modules = True self.has_native_async = True self.runspace = None self.host = None self._shell_type = 'powershell' super(Connection, self).__init__(*args, **kwargs) if not C.DEFAULT_DEBUG: logging.getLogger('pypsrp').setLevel(logging.WARNING) logging.getLogger('requests_credssp').setLevel(logging.INFO) logging.getLogger('urllib3').setLevel(logging.INFO) def _connect(self): if not HAS_PYPSRP: raise AnsibleError("pypsrp or dependencies are not installed: %s" % to_native(PYPSRP_IMP_ERR)) super(Connection, self)._connect() self._build_kwargs() display.vvv("ESTABLISH PSRP CONNECTION FOR USER: %s ON PORT %s TO %s" % (self._psrp_user, self._psrp_port, self._psrp_host), host=self._psrp_host) if not self.runspace: connection = WSMan(**self._psrp_conn_kwargs) # create our psuedo host to capture the exit code and host output host_ui = PSHostUserInterface() self.host = PSHost(None, None, False, "Ansible PSRP Host", None, 
                           host_ui, None)
        self.runspace = RunspacePool(
            connection, host=self.host,
            configuration_name=self._psrp_configuration_name
        )
        display.vvvvv(
            "PSRP OPEN RUNSPACE: auth=%s configuration=%s endpoint=%s"
            % (self._psrp_auth, self._psrp_configuration_name,
               connection.transport.endpoint),
            host=self._psrp_host
        )

        try:
            self.runspace.open()
        except AuthenticationError as e:
            raise AnsibleConnectionFailure("failed to authenticate with "
                                           "the server: %s" % to_native(e))
        except WinRMError as e:
            raise AnsibleConnectionFailure(
                "psrp connection failure during runspace open: %s"
                % to_native(e)
            )
        except (ConnectionError, ConnectTimeout) as e:
            raise AnsibleConnectionFailure(
                "Failed to connect to the host via PSRP: %s"
                % to_native(e)
            )

        self._connected = True
        return self

    def reset(self):
        display.vvvvv("PSRP: Reset Connection", host=self._psrp_host)
        self.runspace = None
        self._connect()

    def exec_command(self, cmd, in_data=None, sudoable=True):
        super(Connection, self).exec_command(cmd, in_data=in_data,
                                             sudoable=sudoable)

        if cmd.startswith(" ".join(_common_args) + " -EncodedCommand"):
            # This is a PowerShell script encoded by the shell plugin, we will
            # decode the script and execute it in the runspace instead of
            # starting a new interpreter to save on time
            b_command = base64.b64decode(cmd.split(" ")[-1])
            script = to_text(b_command, 'utf-16-le')
            in_data = to_text(in_data, errors="surrogate_or_strict", nonstring="passthru")

            if in_data and in_data.startswith(u"#!"):
                # ANSIBALLZ wrapper, we need to get the interpreter and execute
                # that as the script - note this won't work as basic.py relies
                # on packages not available on Windows, once fixed we can enable
                # this path
                interpreter = to_native(in_data.splitlines()[0][2:])
                # script = "$input | &'%s' -" % interpreter
                # in_data = to_text(in_data)
                raise AnsibleError("cannot run the interpreter '%s' on the psrp "
                                   "connection plugin" % interpreter)

            # call build_module_command to get the bootstrap wrapper text
            bootstrap_wrapper = self._shell.build_module_command('', '', '')
            if bootstrap_wrapper == cmd:
                # Do not display to the user each invocation of the bootstrap wrapper
                display.vvv("PSRP: EXEC (via pipeline wrapper)")
            else:
                display.vvv("PSRP: EXEC %s" % script, host=self._psrp_host)
        else:
            # In other cases we want to execute the cmd as the script. We add on the 'exit $LASTEXITCODE' to ensure the
            # rc is propagated back to the connection plugin.
            script = to_text(u"%s\nexit $LASTEXITCODE" % cmd)
            display.vvv(u"PSRP: EXEC %s" % script, host=self._psrp_host)

        rc, stdout, stderr = self._exec_psrp_script(script, in_data)
        return rc, stdout, stderr

    def put_file(self, in_path, out_path):
        super(Connection, self).put_file(in_path, out_path)
        display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._psrp_host)

        out_path = self._shell._unquote(out_path)
        script = u'''begin {
    $ErrorActionPreference = "Stop"

    $path = '%s'
    $fd = [System.IO.File]::Create($path)
    $algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
    $bytes = @()
} process {
    $bytes = [System.Convert]::FromBase64String($input)
    $algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null
    $fd.Write($bytes, 0, $bytes.Length)
} end {
    $fd.Close()
    $algo.TransformFinalBlock($bytes, 0, 0) > $null
    $hash = [System.BitConverter]::ToString($algo.Hash)
    $hash = $hash.Replace("-", "").ToLowerInvariant()

    Write-Output -InputObject "{`"sha1`":`"$hash`"}"
}''' % self._shell._escape(out_path)

        cmd_parts = self._shell._encode_script(script, as_list=True,
                                               strict_mode=False,
                                               preserve_rc=False)
        b_in_path = to_bytes(in_path, errors='surrogate_or_strict')
        if not os.path.exists(b_in_path):
            raise AnsibleFileNotFound('file or module does not exist: "%s"'
                                      % to_native(in_path))

        in_size = os.path.getsize(b_in_path)
        buffer_size = int(self.runspace.connection.max_payload_size / 4 * 3)

        # copying files is faster when using the raw WinRM shell and not PSRP
        # we will create a WinRS shell just for this process
        # TODO: speed this up as there is overhead creating a shell for this
        with WinRS(self.runspace.connection, codepage=65001) as shell:
            process = Process(shell, cmd_parts[0], cmd_parts[1:])
            process.begin_invoke()

            offset = 0
            with open(b_in_path, 'rb') as src_file:
                for data in iter((lambda: src_file.read(buffer_size)), b""):
                    offset += len(data)
                    display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" %
                                  (in_path, out_path, offset, len(data)),
                                  host=self._psrp_host)
                    b64_data = base64.b64encode(data) + b"\r\n"
                    process.send(b64_data, end=(src_file.tell() == in_size))

                # the file was empty, return empty buffer
                if offset == 0:
                    process.send(b"", end=True)

            process.end_invoke()
            process.signal(SignalCode.CTRL_C)

        if process.rc != 0:
            raise AnsibleError(to_native(process.stderr))

        put_output = json.loads(process.stdout)
        remote_sha1 = put_output.get("sha1")

        if not remote_sha1:
            raise AnsibleError("Remote sha1 was not returned, stdout: '%s', "
                               "stderr: '%s'" % (to_native(process.stdout),
                                                 to_native(process.stderr)))

        local_sha1 = secure_hash(in_path)
        if not remote_sha1 == local_sha1:
            raise AnsibleError("Remote sha1 hash %s does not match local hash "
                               "%s" % (to_native(remote_sha1),
                                       to_native(local_sha1)))

    def fetch_file(self, in_path, out_path):
        super(Connection, self).fetch_file(in_path, out_path)
        display.vvv("FETCH %s TO %s" % (in_path, out_path),
                    host=self._psrp_host)

        in_path = self._shell._unquote(in_path)
        out_path = out_path.replace('\\', '/')

        # because we are dealing with base64 data we need to get the max size
        # of the bytes that the base64 size would equal
        max_b64_size = int(self.runspace.connection.max_payload_size -
                           (self.runspace.connection.max_payload_size / 4 * 3))
        buffer_size = max_b64_size - (max_b64_size % 1024)

        # setup the file stream with read only mode
        setup_script = '''$ErrorActionPreference = "Stop"
$path = '%s'

if (Test-Path -Path $path -PathType Leaf) {
    $fs = New-Object -TypeName System.IO.FileStream -ArgumentList @(
        $path,
        [System.IO.FileMode]::Open,
        [System.IO.FileAccess]::Read,
        [System.IO.FileShare]::Read
    )
    $buffer_size = %d
} elseif (Test-Path -Path $path -PathType Container) {
    Write-Output -InputObject "[DIR]"
} else {
    Write-Error -Message "$path does not exist"
    $host.SetShouldExit(1)
}''' % (self._shell._escape(in_path), buffer_size)

        # read the file stream at the offset and return the b64 string
        read_script = '''$ErrorActionPreference = "Stop"
$fs.Seek(%d, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] -ArgumentList $buffer_size
$bytes_read = $fs.Read($buffer, 0, $buffer_size)

if ($bytes_read -gt 0) {
    $bytes = $buffer[0..($bytes_read - 1)]
    Write-Output -InputObject ([System.Convert]::ToBase64String($bytes))
}'''

        # need to run the setup script outside of the local scope so the
        # file stream stays active between fetch operations
        rc, stdout, stderr = self._exec_psrp_script(setup_script,
                                                    use_local_scope=False,
                                                    force_stop=True)
        if rc != 0:
            raise AnsibleError("failed to setup file stream for fetch '%s': %s"
                               % (out_path, to_native(stderr)))
        elif stdout.strip() == '[DIR]':
            # to be consistent with other connection plugins, we assume the caller has created the target dir
            return

        b_out_path = to_bytes(out_path, errors='surrogate_or_strict')
        # to be consistent with other connection plugins, we assume the caller has created the target dir
        offset = 0
        with open(b_out_path, 'wb') as out_file:
            while True:
                display.vvvvv("PSRP FETCH %s to %s (offset=%d" %
                              (in_path, out_path, offset), host=self._psrp_host)
                rc, stdout, stderr = self._exec_psrp_script(read_script % offset, force_stop=True)
                if rc != 0:
                    raise AnsibleError("failed to transfer file to '%s': %s"
                                       % (out_path, to_native(stderr)))

                data = base64.b64decode(stdout.strip())
                out_file.write(data)
                if len(data) < buffer_size:
                    break
                offset += len(data)

            rc, stdout, stderr = self._exec_psrp_script("$fs.Close()", force_stop=True)
            if rc != 0:
                display.warning("failed to close remote file stream of file "
                                "'%s': %s" % (in_path, to_native(stderr)))

    def close(self):
        if self.runspace and self.runspace.state == RunspacePoolState.OPENED:
            display.vvvvv("PSRP CLOSE RUNSPACE: %s" % (self.runspace.id),
                          host=self._psrp_host)
            self.runspace.close()
        self.runspace = None
        self._connected = False

    def _build_kwargs(self):
        self._psrp_host = self.get_option('remote_addr')
        self._psrp_user = self.get_option('remote_user')
        self._psrp_pass = self.get_option('remote_password')

        protocol = self.get_option('protocol')
        port = self.get_option('port')
        if protocol is None and port is None:
            protocol = 'https'
            port = 5986
        elif protocol is None:
            protocol = 'https' if int(port) != 5985 else 'http'
        elif port is None:
            port = 5986 if protocol == 'https' else 5985

        self._psrp_protocol = protocol
        self._psrp_port = int(port)

        self._psrp_path = self.get_option('path')
        self._psrp_auth = self.get_option('auth')
        # cert validation can either be a bool or a path to the cert
        cert_validation = self.get_option('cert_validation')
        cert_trust_path = self.get_option('ca_cert')
        if cert_validation == 'ignore':
            self._psrp_cert_validation = False
        elif cert_trust_path is not None:
            self._psrp_cert_validation = cert_trust_path
        else:
            self._psrp_cert_validation = True

        self._psrp_connection_timeout = self.get_option('connection_timeout')  # Can be None
        self._psrp_read_timeout = self.get_option('read_timeout')  # Can be None
        self._psrp_message_encryption = self.get_option('message_encryption')
        self._psrp_proxy = self.get_option('proxy')
        self._psrp_ignore_proxy = boolean(self.get_option('ignore_proxy'))
        self._psrp_operation_timeout = int(self.get_option('operation_timeout'))
        self._psrp_max_envelope_size = int(self.get_option('max_envelope_size'))
        self._psrp_configuration_name = self.get_option('configuration_name')
        self._psrp_reconnection_retries = int(self.get_option('reconnection_retries'))
        self._psrp_reconnection_backoff = float(self.get_option('reconnection_backoff'))

        self._psrp_certificate_key_pem = self.get_option('certificate_key_pem')
        self._psrp_certificate_pem = self.get_option('certificate_pem')
        self._psrp_credssp_auth_mechanism = self.get_option('credssp_auth_mechanism')
        self._psrp_credssp_disable_tlsv1_2 = self.get_option('credssp_disable_tlsv1_2')
        self._psrp_credssp_minimum_version = self.get_option('credssp_minimum_version')
        self._psrp_negotiate_send_cbt = self.get_option('negotiate_send_cbt')
        self._psrp_negotiate_delegate = self.get_option('negotiate_delegate')
        self._psrp_negotiate_hostname_override = self.get_option('negotiate_hostname_override')
        self._psrp_negotiate_service = self.get_option('negotiate_service')

        supported_args = []
        for auth_kwarg in AUTH_KWARGS.values():
            supported_args.extend(auth_kwarg)
        extra_args = set([v.replace('ansible_psrp_', '') for v in self.get_option('_extras')])
        unsupported_args = extra_args.difference(supported_args)

        for arg in unsupported_args:
            display.warning("ansible_psrp_%s is unsupported by the current "
                            "psrp version installed" % arg)

        self._psrp_conn_kwargs = dict(
            server=self._psrp_host, port=self._psrp_port,
            username=self._psrp_user, password=self._psrp_pass,
            ssl=self._psrp_protocol == 'https', path=self._psrp_path,
            auth=self._psrp_auth, cert_validation=self._psrp_cert_validation,
            connection_timeout=self._psrp_connection_timeout,
            encryption=self._psrp_message_encryption, proxy=self._psrp_proxy,
            no_proxy=self._psrp_ignore_proxy,
            max_envelope_size=self._psrp_max_envelope_size,
            operation_timeout=self._psrp_operation_timeout,
            certificate_key_pem=self._psrp_certificate_key_pem,
            certificate_pem=self._psrp_certificate_pem,
            credssp_auth_mechanism=self._psrp_credssp_auth_mechanism,
            credssp_disable_tlsv1_2=self._psrp_credssp_disable_tlsv1_2,
            credssp_minimum_version=self._psrp_credssp_minimum_version,
            negotiate_send_cbt=self._psrp_negotiate_send_cbt,
            negotiate_delegate=self._psrp_negotiate_delegate,
            negotiate_hostname_override=self._psrp_negotiate_hostname_override,
            negotiate_service=self._psrp_negotiate_service,
        )

        # Check if PSRP version supports newer read_timeout argument (needs pypsrp 0.3.0+)
        if hasattr(pypsrp, 'FEATURES') and 'wsman_read_timeout' in pypsrp.FEATURES:
            self._psrp_conn_kwargs['read_timeout'] = self._psrp_read_timeout
        elif self._psrp_read_timeout is not None:
            display.warning("ansible_psrp_read_timeout is unsupported by the current psrp version installed, "
                            "using ansible_psrp_connection_timeout value for read_timeout instead.")

        # Check if PSRP version supports newer reconnection_retries argument (needs pypsrp 0.3.0+)
        if hasattr(pypsrp, 'FEATURES') and 'wsman_reconnections' in pypsrp.FEATURES:
            self._psrp_conn_kwargs['reconnection_retries'] = self._psrp_reconnection_retries
            self._psrp_conn_kwargs['reconnection_backoff'] = self._psrp_reconnection_backoff
        else:
            if self._psrp_reconnection_retries is not None:
                display.warning("ansible_psrp_reconnection_retries is unsupported by the current psrp version installed.")
            if self._psrp_reconnection_backoff is not None:
                display.warning("ansible_psrp_reconnection_backoff is unsupported by the current psrp version installed.")

        # add in the extra args that were set
        for arg in extra_args.intersection(supported_args):
            option = self.get_option('_extras')['ansible_psrp_%s' % arg]
            self._psrp_conn_kwargs[arg] = option

    def _exec_psrp_script(self, script, input_data=None, use_local_scope=True, force_stop=False):
        ps = PowerShell(self.runspace)
        ps.add_script(script, use_local_scope=use_local_scope)
        ps.invoke(input=input_data)

        rc, stdout, stderr = self._parse_pipeline_result(ps)

        if force_stop:
            # This is usually not needed because we close the Runspace after our exec and we skip the call to close the
            # pipeline manually to save on some time. Set to True when running multiple exec calls in the same runspace.

            # Current pypsrp versions raise an exception if the current state was not RUNNING. We manually set it so we
            # can call stop without any issues.
            ps.state = PSInvocationState.RUNNING
            ps.stop()

        return rc, stdout, stderr

    def _parse_pipeline_result(self, pipeline):
        """
        PSRP doesn't have the same concept as other protocols with its output.
        We need some extra logic to convert the pipeline streams and host
        output into the format that Ansible understands.

        :param pipeline: The finished PowerShell pipeline that invoked our
            commands
        :return: rc, stdout, stderr based on the pipeline output
        """
        # we try and get the rc from our host implementation, this is set if
        # exit or $host.SetShouldExit() is called in our pipeline, if not we
        # set to 0 if the pipeline had not errors and 1 if it did
        rc = self.host.rc or (1 if pipeline.had_errors else 0)

        # TODO: figure out a better way of merging this with the host output
        stdout_list = []
        for output in pipeline.output:
            # Not all pipeline outputs are a string or contain a __str__ value,
            # we will create our own output based on the properties of the
            # complex object if that is the case.
            if isinstance(output, GenericComplexObject) and output.to_string is None:
                obj_lines = output.property_sets
                for key, value in output.adapted_properties.items():
                    obj_lines.append(u"%s: %s" % (key, value))
                for key, value in output.extended_properties.items():
                    obj_lines.append(u"%s: %s" % (key, value))
                output_msg = u"\n".join(obj_lines)
            else:
                output_msg = to_text(output, nonstring='simplerepr')

            stdout_list.append(output_msg)

        if len(self.host.ui.stdout) > 0:
            stdout_list += self.host.ui.stdout
        stdout = u"\r\n".join(stdout_list)

        stderr_list = []
        for error in pipeline.streams.error:
            # the error record is not as fully fleshed out like we usually get
            # in PS, we will manually create it here
            command_name = "%s : " % error.command_name if error.command_name else ''
            position = "%s\r\n" % error.invocation_position_message if error.invocation_position_message else ''
            error_msg = "%s%s\r\n%s" \
                        "    + CategoryInfo          : %s\r\n" \
                        "    + FullyQualifiedErrorId : %s" \
                        % (command_name, str(error), position,
                           error.message, error.fq_error)
            stacktrace = error.script_stacktrace
            if self._play_context.verbosity >= 3 and stacktrace is not None:
                error_msg += "\r\nStackTrace:\r\n%s" % stacktrace
            stderr_list.append(error_msg)

        if len(self.host.ui.stderr) > 0:
            stderr_list += self.host.ui.stderr
        stderr = u"\r\n".join([to_text(o) for o in stderr_list])

        display.vvvvv("PSRP RC: %d" % rc, host=self._psrp_host)
        display.vvvvv("PSRP STDOUT: %s" % stdout, host=self._psrp_host)
        display.vvvvv("PSRP STDERR: %s" % stderr, host=self._psrp_host)

        # reset the host back output back to defaults, needed if running
        # multiple pipelines on the same RunspacePool
        self.host.rc = 0
        self.host.ui.stdout = []
        self.host.ui.stderr = []

        return rc, to_bytes(stdout, encoding='utf-8'), to_bytes(stderr, encoding='utf-8')
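For orientation alongside the plugin source above, here is a minimal sketch of the pypsrp calls the plugin builds on (`RunspacePool` and `PowerShell` are the same classes used in `_exec_psrp_script`); the host name and credentials are placeholders, not values from this record.

```python
from pypsrp.powershell import PowerShell, RunspacePool
from pypsrp.wsman import WSMan

# placeholder endpoint and credentials; cert_validation=False only for lab use
wsman = WSMan("win-host.example.com", username="user", password="pass",
              ssl=True, cert_validation=False)

with RunspacePool(wsman) as pool:  # opens and closes the runspace pool
    ps = PowerShell(pool)
    ps.add_script("$PSVersionTable.PSVersion.ToString()")
    ps.invoke()
    # the plugin derives rc/stdout/stderr from these same attributes
    print(ps.output, ps.had_errors, [str(e) for e in ps.streams.error])
```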
closed
ansible/ansible
https://github.com/ansible/ansible
69,550
SSH Windows - Fails to decode stderr when pipelining is disabled
##### SUMMARY
When using SSH against a Windows host with PowerShell v5 the stderr when pipelining is not being used contains a nested CLIXML message in the stderr. We should be able to handle this correctly and not fail.

Currently I can only explicitly disable pipelining for Windows as it should always be enabled but there is a report where this has happened in the wild so it's not unheard of https://github.com/ansible/awx/issues/6990.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
ssh
powershell

##### ANSIBLE VERSION
```paste below
devel
```

##### CONFIGURATION
Not sure yet, need to figure out how pipelining was disabled on the Windows case.

##### OS / ENVIRONMENT
Targeting Windows Server with PowerShell v5 and the SSH default shell is `powershell.exe`

##### STEPS TO REPRODUCE
Not sure yet, can only replicate by connecting to SSH on a WIndows host and manually changing https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py#L464 to False.

##### EXPECTED RESULTS
Works just fine

##### ACTUAL RESULTS
Fails because the stderr looks like (see nested CLIXML header)

```paste below
b'#< CLIXML\r\n#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs><Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">2</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>')
```
https://github.com/ansible/ansible/issues/69550
https://github.com/ansible/ansible/pull/71412
3c8744f0c157b867cb5808b3a9efae3f22f26735
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
2020-05-15T20:52:54Z
python
2020-08-25T21:06:19Z
changelogs/fragments/powershell-nested-clixml.yml
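The failure mode described in the record above can be reproduced against the parser shipped in `lib/ansible/plugins/shell/powershell.py`. The sketch below is an illustration, not code from the linked PR: a doubled `#< CLIXML` header makes `ElementTree` fail on the leftover header line, because the parser only strips one header.

```python
import xml.etree.ElementTree as ET

from ansible.plugins.shell.powershell import _parse_clixml

# two CLIXML headers back to back, as in the reported stderr
nested = (b'#< CLIXML\r\n#< CLIXML\r\n'
          b'<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"></Objs>')
try:
    _parse_clixml(nested)
except ET.ParseError as err:
    # only the first header is stripped, so XML parsing starts at '#< CLIXML'
    print("parse fails before the fix: %s" % err)
```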
closed
ansible/ansible
https://github.com/ansible/ansible
69,550
SSH Windows - Fails to decode stderr when pipelining is disabled
##### SUMMARY
When using SSH against a Windows host with PowerShell v5 the stderr when pipelining is not being used contains a nested CLIXML message in the stderr. We should be able to handle this correctly and not fail.

Currently I can only explicitly disable pipelining for Windows as it should always be enabled but there is a report where this has happened in the wild so it's not unheard of https://github.com/ansible/awx/issues/6990.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
ssh
powershell

##### ANSIBLE VERSION
```paste below
devel
```

##### CONFIGURATION
Not sure yet, need to figure out how pipelining was disabled on the Windows case.

##### OS / ENVIRONMENT
Targeting Windows Server with PowerShell v5 and the SSH default shell is `powershell.exe`

##### STEPS TO REPRODUCE
Not sure yet, can only replicate by connecting to SSH on a WIndows host and manually changing https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py#L464 to False.

##### EXPECTED RESULTS
Works just fine

##### ACTUAL RESULTS
Fails because the stderr looks like (see nested CLIXML header)

```paste below
b'#< CLIXML\r\n#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs><Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">2</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>')
```
https://github.com/ansible/ansible/issues/69550
https://github.com/ansible/ansible/pull/71412
3c8744f0c157b867cb5808b3a9efae3f22f26735
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
2020-05-15T20:52:54Z
python
2020-08-25T21:06:19Z
lib/ansible/plugins/shell/powershell.py
# Copyright (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
name: powershell
plugin_type: shell
version_added: historical
short_description: Windows PowerShell
description:
- The only option when using 'winrm' or 'psrp' as a connection plugin.
- Can also be used when using 'ssh' as a connection plugin and the C(DefaultShell) has been configured to PowerShell.
extends_documentation_fragment:
- shell_windows
'''

import base64
import os
import re
import shlex
import pkgutil
import xml.etree.ElementTree as ET
import ntpath

from ansible.module_utils._text import to_bytes, to_text
from ansible.plugins.shell import ShellBase

_common_args = ['PowerShell', '-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted']

# Primarily for testing, allow explicitly specifying PowerShell version via
# an environment variable.
_powershell_version = os.environ.get('POWERSHELL_VERSION', None)
if _powershell_version:
    _common_args = ['PowerShell', '-Version', _powershell_version] + _common_args[1:]


def _parse_clixml(data, stream="Error"):
    """
    Takes a byte string like '#< CLIXML\r\n<Objs...' and extracts the stream
    message encoded in the XML data. CLIXML is used by PowerShell to encode
    multiple objects in stderr.
    """
    clixml = ET.fromstring(data.split(b"\r\n", 1)[-1])
    namespace_match = re.match(r'{(.*)}', clixml.tag)
    namespace = "{%s}" % namespace_match.group(1) if namespace_match else ""

    strings = clixml.findall("./%sS" % namespace)
    lines = [e.text.replace('_x000D__x000A_', '') for e in strings if e.attrib.get('S') == stream]
    return to_bytes('\r\n'.join(lines))


class ShellModule(ShellBase):

    # Common shell filenames that this plugin handles
    # Powershell is handled differently.  It's selected when winrm is the
    # connection
    COMPATIBLE_SHELLS = frozenset()
    # Family of shells this has.  Must match the filename without extension
    SHELL_FAMILY = 'powershell'

    _SHELL_REDIRECT_ALLNULL = '> $null'
    _SHELL_AND = ';'

    # Used by various parts of Ansible to do Windows specific changes
    _IS_WINDOWS = True

    # TODO: add binary module support

    def env_prefix(self, **kwargs):
        # powershell/winrm env handling is handled in the exec wrapper
        return ""

    def join_path(self, *args):
        # use normpath() to remove doubled slashed and convert forward to backslashes
        parts = [ntpath.normpath(self._unquote(arg)) for arg in args]

        # Becuase ntpath.join treats any component that begins with a backslash as an absolute path,
        # we have to strip slashes from at least the beginning, otherwise join will ignore all previous
        # path components except for the drive.
        return ntpath.join(parts[0], *[part.strip('\\') for part in parts[1:]])

    def get_remote_filename(self, pathname):
        # powershell requires that script files end with .ps1
        base_name = os.path.basename(pathname.strip())
        name, ext = os.path.splitext(base_name.strip())
        if ext.lower() not in ['.ps1', '.exe']:
            return name + '.ps1'

        return base_name.strip()

    def path_has_trailing_slash(self, path):
        # Allow Windows paths to be specified using either slash.
        path = self._unquote(path)
        return path.endswith('/') or path.endswith('\\')

    def chmod(self, paths, mode):
        raise NotImplementedError('chmod is not implemented for Powershell')

    def chown(self, paths, user):
        raise NotImplementedError('chown is not implemented for Powershell')

    def set_user_facl(self, paths, user, mode):
        raise NotImplementedError('set_user_facl is not implemented for Powershell')

    def remove(self, path, recurse=False):
        path = self._escape(self._unquote(path))
        if recurse:
            return self._encode_script('''Remove-Item "%s" -Force -Recurse;''' % path)
        else:
            return self._encode_script('''Remove-Item "%s" -Force;''' % path)

    def mkdtemp(self, basefile=None, system=False, mode=None, tmpdir=None):
        # Windows does not have an equivalent for the system temp files, so
        # the param is ignored
        if not basefile:
            basefile = self.__class__._generate_temp_dir_name()
        basefile = self._escape(self._unquote(basefile))
        basetmpdir = tmpdir if tmpdir else self.get_option('remote_tmp')

        script = '''
        $tmp_path = [System.Environment]::ExpandEnvironmentVariables('%s')
        $tmp = New-Item -Type Directory -Path $tmp_path -Name '%s'
        Write-Output -InputObject $tmp.FullName
        ''' % (basetmpdir, basefile)
        return self._encode_script(script.strip())

    def expand_user(self, user_home_path, username=''):
        # PowerShell only supports "~" (not "~username").  Resolve-Path ~ does
        # not seem to work remotely, though by default we are always starting
        # in the user's home directory.
        user_home_path = self._unquote(user_home_path)
        if user_home_path == '~':
            script = 'Write-Output (Get-Location).Path'
        elif user_home_path.startswith('~\\'):
            script = 'Write-Output ((Get-Location).Path + "%s")' % self._escape(user_home_path[1:])
        else:
            script = 'Write-Output "%s"' % self._escape(user_home_path)
        return self._encode_script(script)

    def exists(self, path):
        path = self._escape(self._unquote(path))
        script = '''
            If (Test-Path "%s")
            {
                $res = 0;
            }
            Else
            {
                $res = 1;
            }
            Write-Output "$res";
            Exit $res;
         ''' % path
        return self._encode_script(script)

    def checksum(self, path, *args, **kwargs):
        path = self._escape(self._unquote(path))
        script = '''
            If (Test-Path -PathType Leaf "%(path)s")
            {
                $sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider;
                $fp = [System.IO.File]::Open("%(path)s", [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read);
                [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower();
                $fp.Dispose();
            }
            ElseIf (Test-Path -PathType Container "%(path)s")
            {
                Write-Output "3";
            }
            Else
            {
                Write-Output "1";
            }
        ''' % dict(path=path)
        return self._encode_script(script)

    def build_module_command(self, env_string, shebang, cmd, arg_path=None):
        bootstrap_wrapper = pkgutil.get_data("ansible.executor.powershell", "bootstrap_wrapper.ps1")

        # pipelining bypass
        if cmd == '':
            return self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False)

        # non-pipelining

        cmd_parts = shlex.split(cmd, posix=False)
        cmd_parts = list(map(to_text, cmd_parts))
        if shebang and shebang.lower() == '#!powershell':
            if not self._unquote(cmd_parts[0]).lower().endswith('.ps1'):
                # we're running a module via the bootstrap wrapper
                cmd_parts[0] = '"%s.ps1"' % self._unquote(cmd_parts[0])
            wrapper_cmd = "type " + cmd_parts[0] + " | " + self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False)
            return wrapper_cmd
        elif shebang and shebang.startswith('#!'):
            cmd_parts.insert(0, shebang[2:])
        elif not shebang:
            # The module is assumed to be a binary
            cmd_parts[0] = self._unquote(cmd_parts[0])
            cmd_parts.append(arg_path)
        script = '''
            Try
            {
                %s
                %s
            }
            Catch
            {
                $_obj = @{ failed = $true }
                If ($_.Exception.GetType)
                {
                    $_obj.Add('msg', $_.Exception.Message)
                }
                Else
                {
                    $_obj.Add('msg', $_.ToString())
                }
                If ($_.InvocationInfo.PositionMessage)
                {
                    $_obj.Add('exception', $_.InvocationInfo.PositionMessage)
                }
                ElseIf ($_.ScriptStackTrace)
                {
                    $_obj.Add('exception', $_.ScriptStackTrace)
                }
                Try
                {
                    $_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
                }
                Catch
                {
                }
                Echo $_obj | ConvertTo-Json -Compress -Depth 99
                Exit 1
            }
        ''' % (env_string, ' '.join(cmd_parts))
        return self._encode_script(script, preserve_rc=False)

    def wrap_for_exec(self, cmd):
        return '& %s; exit $LASTEXITCODE' % cmd

    def _unquote(self, value):
        '''Remove any matching quotes that wrap the given value.'''
        value = to_text(value or '')
        m = re.match(r'^\s*?\'(.*?)\'\s*?$', value)
        if m:
            return m.group(1)
        m = re.match(r'^\s*?"(.*?)"\s*?$', value)
        if m:
            return m.group(1)
        return value

    def _escape(self, value, include_vars=False):
        '''Return value escaped for use in PowerShell command.'''
        # http://www.techotopia.com/index.php/Windows_PowerShell_1.0_String_Quoting_and_Escape_Sequences
        # http://stackoverflow.com/questions/764360/a-list-of-string-replacements-in-python
        subs = [('\n', '`n'), ('\r', '`r'), ('\t', '`t'), ('\a', '`a'),
                ('\b', '`b'), ('\f', '`f'), ('\v', '`v'), ('"', '`"'),
                ('\'', '`\''), ('`', '``'), ('\x00', '`0')]
        if include_vars:
            subs.append(('$', '`$'))
        pattern = '|'.join('(%s)' % re.escape(p) for p, s in subs)
        substs = [s for p, s in subs]

        def replace(m):
            return substs[m.lastindex - 1]

        return re.sub(pattern, replace, value)

    def _encode_script(self, script, as_list=False, strict_mode=True, preserve_rc=True):
        '''Convert a PowerShell script to a single base64-encoded command.'''
        script = to_text(script)

        if script == u'-':
            cmd_parts = _common_args + ['-Command', '-']

        else:
            if strict_mode:
                script = u'Set-StrictMode -Version Latest\r\n%s' % script
            # try to propagate exit code if present- won't work with begin/process/end-style scripts (ala put_file)
            # NB: the exit code returned may be incorrect in the case of a successful command followed by an invalid command
            if preserve_rc:
                script = u'%s\r\nIf (-not $?) { If (Get-Variable LASTEXITCODE -ErrorAction SilentlyContinue) { exit $LASTEXITCODE } Else { exit 1 } }\r\n' \
                    % script
            script = '\n'.join([x.strip() for x in script.splitlines() if x.strip()])
            encoded_script = to_text(base64.b64encode(script.encode('utf-16-le')), 'utf-8')
            cmd_parts = _common_args + ['-EncodedCommand', encoded_script]

        if as_list:
            return cmd_parts
        return ' '.join(cmd_parts)
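A short usage sketch for the two helpers above; the CLIXML blob is a hand-written stand-in for real PowerShell stderr, not output captured from a host.

```python
from ansible.plugins.shell.powershell import ShellModule, _parse_clixml

stderr = (b'#< CLIXML\r\n'
          b'<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">'
          b'<S S="Error">Something failed_x000D__x000A_</S></Objs>')
print(_parse_clixml(stderr))  # b'Something failed'

sh = ShellModule()
# yields 'PowerShell -NoProfile ... -EncodedCommand <base64 of the UTF-16-LE script>'
print(sh._encode_script('Get-Date'))
```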
closed
ansible/ansible
https://github.com/ansible/ansible
69,550
SSH Windows - Fails to decode stderr when pipelining is disabled
##### SUMMARY When using SSH against a Windows host with PowerShell v5 the stderr when pipelining is not being used contains a nested CLIXML message in the stderr. We should be able to handle this correctly and not fail. Currently I can only explicitly disable pipelining for Windows as it should always be enabled but there is a report where this has happened in the wild so it's not unheard of https://github.com/ansible/awx/issues/6990. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ssh powershell ##### ANSIBLE VERSION ```paste below devel ``` ##### CONFIGURATION Not sure yet, need to figure out how pipelining was disabled on the Windows case. ##### OS / ENVIRONMENT Targeting Windows Server with PowerShell v5 and the SSH default shell is `powershell.exe` ##### STEPS TO REPRODUCE Not sure yet, can only replicate by connecting to SSH on a WIndows host and manually changing https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py#L464 to False. ##### EXPECTED RESULTS Works just fine ##### ACTUAL RESULTS Fails because the stderr looks like (see nested CLIXML header) ```paste below b'#< CLIXML\r\n#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs><Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">2</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>') ```
https://github.com/ansible/ansible/issues/69550
https://github.com/ansible/ansible/pull/71412
3c8744f0c157b867cb5808b3a9efae3f22f26735
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
2020-05-15T20:52:54Z
python
2020-08-25T21:06:19Z
test/units/plugins/shell/test_powershell.py
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.shell.powershell import _parse_clixml, ShellModule


def test_parse_clixml_empty():
    empty = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"></Objs>'
    expected = b''
    actual = _parse_clixml(empty)
    assert actual == expected


def test_parse_clixml_with_progress():
    progress = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
               b'<Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS>' \
               b'<I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil />' \
               b'<PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>'
    expected = b''
    actual = _parse_clixml(progress)
    assert actual == expected


def test_parse_clixml_single_stream():
    single_stream = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
                    b'<S S="Error">fake : The term \'fake\' is not recognized as the name of a cmdlet. Check _x000D__x000A_</S>' \
                    b'<S S="Error">the spelling of the name, or if a path was included._x000D__x000A_</S>' \
                    b'<S S="Error">At line:1 char:1_x000D__x000A_</S>' \
                    b'<S S="Error">+ fake cmdlet_x000D__x000A_</S><S S="Error">+ ~~~~_x000D__x000A_</S>' \
                    b'<S S="Error">    + CategoryInfo          : ObjectNotFound: (fake:String) [], CommandNotFoundException_x000D__x000A_</S>' \
                    b'<S S="Error">    + FullyQualifiedErrorId : CommandNotFoundException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S>' \
                    b'</Objs>'
    expected = b"fake : The term 'fake' is not recognized as the name of a cmdlet. Check \r\n" \
               b"the spelling of the name, or if a path was included.\r\n" \
               b"At line:1 char:1\r\n" \
               b"+ fake cmdlet\r\n" \
               b"+ ~~~~\r\n" \
               b"    + CategoryInfo          : ObjectNotFound: (fake:String) [], CommandNotFoundException\r\n" \
               b"    + FullyQualifiedErrorId : CommandNotFoundException\r\n "
    actual = _parse_clixml(single_stream)
    assert actual == expected


def test_parse_clixml_multiple_streams():
    multiple_stream = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
                      b'<S S="Error">fake : The term \'fake\' is not recognized as the name of a cmdlet. Check _x000D__x000A_</S>' \
                      b'<S S="Error">the spelling of the name, or if a path was included._x000D__x000A_</S>' \
                      b'<S S="Error">At line:1 char:1_x000D__x000A_</S>' \
                      b'<S S="Error">+ fake cmdlet_x000D__x000A_</S><S S="Error">+ ~~~~_x000D__x000A_</S>' \
                      b'<S S="Error">    + CategoryInfo          : ObjectNotFound: (fake:String) [], CommandNotFoundException_x000D__x000A_</S>' \
                      b'<S S="Error">    + FullyQualifiedErrorId : CommandNotFoundException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S>' \
                      b'<S S="Info">hi info</S>' \
                      b'</Objs>'
    expected = b"hi info"
    actual = _parse_clixml(multiple_stream, stream="Info")
    assert actual == expected


def test_join_path_unc():
    pwsh = ShellModule()
    unc_path_parts = ['\\\\host\\share\\dir1\\\\dir2\\', '\\dir3/dir4', 'dir5', 'dir6\\']
    expected = '\\\\host\\share\\dir1\\dir2\\dir3\\dir4\\dir5\\dir6'
    actual = pwsh.join_path(*unc_path_parts)
    assert actual == expected
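One way the nested-CLIXML case could be handled, sketched here as an assumption about the direction of the fix rather than a quote of PR #71412: extract each `<Objs>...</Objs>` document from the buffer and run the existing single-document parser over them individually. The helper name `parse_nested_clixml` is hypothetical.

```python
import re

from ansible.plugins.shell.powershell import _parse_clixml


def parse_nested_clixml(data, stream="Error"):
    # split the buffer into individual CLIXML documents, then reuse the
    # single-document parser with a fresh header prepended to each
    docs = re.findall(br"<Objs.*?</Objs>", data, re.DOTALL)
    return b"\r\n".join(_parse_clixml(b"#< CLIXML\r\n" + doc, stream=stream)
                        for doc in docs)
```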
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
`fetch` module fails on Windows with file names containing dollar signs (winrm FETCH does not quote these from PowerShell). When using `|quote` or backslash-escaping the dollars, slurp, which is executed by fetch, does handle such paths correctly and therefore fails on pre-quoted ones.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
fetch

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400
DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback']
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json']
DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection']
DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup']
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus
DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars']
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* Control host: CentOS 7.6
* target: Microsoft Windows Server 2008 R2 Standard Service Pack 1

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a file such as `C:\Temp\file_with_$dollar.txt`

<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get Logs
  hosts: all
  vars:
    log_files:
      - "C:/Temp/testfile_with_$dollar.txt"
  tasks:
    - name: touch test files
      win_file:
        path: "{{ item }}"
        state: touch
      with_items: "{{ log_files }}"

    - name: fetch log files
      fetch:
        src: "{{ item }}"
        dest: "{{ tmp_dir }}/{{ item | basename }}"
        flat: yes
      with_items: "{{ log_files }}"
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
File gets copied from Windows remote to localhost.

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin
/etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected
Set default localhost to localhost
Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin
Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py
Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py

PLAYBOOK: test.yml ***************************************************************************************************************************************************************************
1 plays in test.yml

PLAY [testclient] ****************************************************************************************************************************************************************************
META: ran handlers

TASK [touch test files] **********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:7
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": true,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/packages/co2mo/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>

fatal: [testclient]: FAILED! => {}

MSG:

failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt"

PLAY RECAP ***********************************************************************************************************************************************************************************
testclient : ok=1 changed=1 unreachable=0 failed=1
```

When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>

fatal: [testclient]: FAILED! => {}
```

When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1
EXEC (via pipeline wrapper)
failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": false,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

MSG:

Path C:\Temp\testfile_with_\$dollar.txt is not found
```

##### WORKAROUND
```yaml
- name: create temporary copy of log files
  win_copy:
    remote_src: true
    src: "{{ item }}"
    dest: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    force: no
  with_items: "{{ log_files }}"

- name: fetch log files
  fetch:
    src: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    dest: "{{ tmp_dir }}/{{ item | basename }}"
    flat: yes
  with_items: "{{ log_files }}"

- name: remove temporary copy of log files
  win_file:
    path: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    state: absent
  with_items: "{{ log_files }}"
```
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
changelogs/fragments/powershell-fix-quoting.yaml
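For illustration of why the reporter's path breaks, the shell plugin shown earlier already has an escape helper that neutralises `$` inside double-quoted PowerShell strings; this snippet only demonstrates that helper and is not the change shipped for this issue.

```python
from ansible.plugins.shell.powershell import ShellModule

sh = ShellModule()
path = 'C:\\Temp\\testfile_with_$dollar.txt'
# include_vars=True adds ('$', '`$') to the substitution list, so PowerShell
# no longer tries to expand the undefined variable $dollar
print(sh._escape(path, include_vars=True))  # C:\Temp\testfile_with_`$dollar.txt
```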
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
`fetch` module fails on Windows with file names containing dollar signs (winrm FETCH does not quote these from PowerShell). When using `|quote` or backslash-escaping the dollars, slurp, which is executed by fetch, does handle such paths correctly and therefore fails on pre-quoted ones.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
fetch

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400
DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback']
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json']
DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection']
DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup']
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus
DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars']
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* Control host: CentOS 7.6
* target: Microsoft Windows Server 2008 R2 Standard Service Pack 1

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a file such as `C:\Temp\file_with_$dollar.txt`

<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get Logs
  hosts: all
  vars:
    log_files:
      - "C:/Temp/testfile_with_$dollar.txt"
  tasks:
    - name: touch test files
      win_file:
        path: "{{ item }}"
        state: touch
      with_items: "{{ log_files }}"

    - name: fetch log files
      fetch:
        src: "{{ item }}"
        dest: "{{ tmp_dir }}/{{ item | basename }}"
        flat: yes
      with_items: "{{ log_files }}"
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
File gets copied from Windows remote to localhost.

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin
/etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected
Set default localhost to localhost
Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin
Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py
Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py

PLAYBOOK: test.yml ***************************************************************************************************************************************************************************
1 plays in test.yml

PLAY [testclient] ****************************************************************************************************************************************************************************
META: ran handlers

TASK [touch test files] **********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:7
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": true,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/packages/co2mo/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>

fatal: [testclient]: FAILED! => {}

MSG:

failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt"

PLAY RECAP ***********************************************************************************************************************************************************************************
testclient : ok=1 changed=1 unreachable=0 failed=1
```

When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>

fatal: [testclient]: FAILED! => {}
```

When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1
EXEC (via pipeline wrapper)
failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": false,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

MSG:

Path C:\Temp\testfile_with_\$dollar.txt is not found
```

##### WORKAROUND
```yaml
- name: create temporary copy of log files
  win_copy:
    remote_src: true
    src: "{{ item }}"
    dest: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    force: no
  with_items: "{{ log_files }}"

- name: fetch log files
  fetch:
    src: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    dest: "{{ tmp_dir }}/{{ item | basename }}"
    flat: yes
  with_items: "{{ log_files }}"

- name: remove temporary copy of log files
  win_file:
    path: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    state: absent
  with_items: "{{ log_files }}"
```
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
lib/ansible/plugins/connection/winrm.py
# (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = """
    author: Ansible Core Team
    connection: winrm
    short_description: Run tasks over Microsoft's WinRM
    description:
        - Run commands or put/fetch on a target via WinRM
        - This plugin allows extra arguments to be passed that are supported by the protocol but not explicitly defined here.
          They should take the form of variables declared with the following pattern `ansible_winrm_<option>`.
    version_added: "2.0"
    requirements:
        - pywinrm (python library)
    options:
      # figure out more elegant 'delegation'
      remote_addr:
        description:
            - Address of the windows machine
        default: inventory_hostname
        vars:
            - name: ansible_host
            - name: ansible_winrm_host
        type: str
      remote_user:
        description:
            - The user to log in as to the Windows machine
        vars:
            - name: ansible_user
            - name: ansible_winrm_user
        type: str
      remote_password:
        description: Authentication password for the C(remote_user). Can be supplied as CLI option.
        vars:
            - name: ansible_password
            - name: ansible_winrm_pass
            - name: ansible_winrm_password
        type: str
        aliases:
        - password  # Needed for --ask-pass to come through on delegation
      port:
        description:
            - port for winrm to connect on remote target
            - The default is the https (5986) port, if using http it should be 5985
        vars:
          - name: ansible_port
          - name: ansible_winrm_port
        default: 5986
        type: integer
      scheme:
        description:
            - URI scheme to use
            - If not set, then will default to C(https) or C(http) if I(port) is
              C(5985).
        choices: [http, https]
        vars:
          - name: ansible_winrm_scheme
        type: str
      path:
        description: URI path to connect to
        default: '/wsman'
        vars:
          - name: ansible_winrm_path
        type: str
      transport:
        description:
           - List of winrm transports to attempt to use (ssl, plaintext, kerberos, etc)
           - If None (the default) the plugin will try to automatically guess the correct list
           - The choices available depend on your version of pywinrm
        type: list
        vars:
          - name: ansible_winrm_transport
      kerberos_command:
        description: kerberos command to use to request a authentication ticket
        default: kinit
        vars:
          - name: ansible_winrm_kinit_cmd
        type: str
      kinit_args:
        description:
        - Extra arguments to pass to C(kinit) when getting the Kerberos authentication ticket.
        - By default no extra arguments are passed into C(kinit) unless I(ansible_winrm_kerberos_delegation) is also
          set. In that case C(-f) is added to the C(kinit) args so a forwardable ticket is retrieved.
        - If set, the args will overwrite any existing defaults for C(kinit), including C(-f) for a delegated ticket.
        type: str
        vars:
          - name: ansible_winrm_kinit_args
        version_added: '2.11'
      kerberos_mode:
        description:
            - kerberos usage mode.
            - The managed option means Ansible will obtain kerberos ticket.
            - While the manual one means a ticket must already have been obtained by the user.
            - If having issues with Ansible freezing when trying to obtain the
              Kerberos ticket, you can either set this to C(manual) and obtain
              it outside Ansible or install C(pexpect) through pip and try
              again.
        choices: [managed, manual]
        vars:
          - name: ansible_winrm_kinit_mode
        type: str
      connection_timeout:
        description:
            - Sets the operation and read timeout settings for the WinRM
              connection.
            - Corresponds to the C(operation_timeout_sec) and
              C(read_timeout_sec) args in pywinrm so avoid setting these vars
              with this one.
            - The default value is whatever is set in the installed version of
              pywinrm.
        vars:
          - name: ansible_winrm_connection_timeout
        type: int
"""

import base64
import logging
import os
import re
import traceback
import json
import tempfile
import shlex
import subprocess

HAVE_KERBEROS = False
try:
    import kerberos
    HAVE_KERBEROS = True
except ImportError:
    pass

from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six.moves.urllib.parse import urlunsplit
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import binary_type, PY3
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.hashing import secure_hash
from ansible.utils.display import Display

# getargspec is deprecated in favour of getfullargspec in Python 3 but
# getfullargspec is not available in Python 2
if PY3:
    from inspect import getfullargspec as getargspec
else:
    from inspect import getargspec

try:
    import winrm
    from winrm import Response
    from winrm.protocol import Protocol
    import requests.exceptions
    HAS_WINRM = True
except ImportError as e:
    HAS_WINRM = False
    WINRM_IMPORT_ERR = e

try:
    import xmltodict
    HAS_XMLTODICT = True
except ImportError as e:
    HAS_XMLTODICT = False
    XMLTODICT_IMPORT_ERR = e

HAS_PEXPECT = False
try:
    import pexpect
    # echo was added in pexpect 3.3+ which is newer than the RHEL package
    # we can only use pexpect for kerb auth if echo is a valid kwarg
    # https://github.com/ansible/ansible/issues/43462
    if hasattr(pexpect, 'spawn'):
        argspec = getargspec(pexpect.spawn.__init__)
        if 'echo' in argspec.args:
            HAS_PEXPECT = True
except ImportError as e:
    pass

# used to try and parse the hostname and detect if IPv6 is being used
try:
    import ipaddress
    HAS_IPADDRESS = True
except ImportError:
    HAS_IPADDRESS = False

display = Display()


class Connection(ConnectionBase):
    '''WinRM connections over HTTP/HTTPS.'''

    transport = 'winrm'
    module_implementation_preferences = ('.ps1', '.exe', '')
    allow_executable = False
    has_pipelining = True
    allow_extras = True

    def __init__(self, *args, **kwargs):
        self.always_pipeline_modules = True
        self.has_native_async = True

        self.protocol = None
        self.shell_id = None
        self.delegate = None
        self._shell_type = 'powershell'

        super(Connection, self).__init__(*args, **kwargs)

        if not C.DEFAULT_DEBUG:
            logging.getLogger('requests_credssp').setLevel(logging.INFO)
            logging.getLogger('requests_kerberos').setLevel(logging.INFO)
            logging.getLogger('urllib3').setLevel(logging.INFO)

    def _build_winrm_kwargs(self):
        # this used to be in set_options, as win_reboot needs to be able to
        # override the conn timeout, we need to be able to build the args
        # after setting individual options. This is called by _connect before
        # starting the WinRM connection
        self._winrm_host = self.get_option('remote_addr')
        self._winrm_user = self.get_option('remote_user')
        self._winrm_pass = self.get_option('remote_password')

        self._winrm_port = self.get_option('port')

        self._winrm_scheme = self.get_option('scheme')
        # old behaviour, scheme should default to http if not set and the port
        # is 5985 otherwise https
        if self._winrm_scheme is None:
            self._winrm_scheme = 'http' if self._winrm_port == 5985 else 'https'

        self._winrm_path = self.get_option('path')
        self._kinit_cmd = self.get_option('kerberos_command')
        self._winrm_transport = self.get_option('transport')
        self._winrm_connection_timeout = self.get_option('connection_timeout')

        if hasattr(winrm, 'FEATURE_SUPPORTED_AUTHTYPES'):
            self._winrm_supported_authtypes = set(winrm.FEATURE_SUPPORTED_AUTHTYPES)
        else:
            # for legacy versions of pywinrm, use the values we know are supported
            self._winrm_supported_authtypes = set(['plaintext', 'ssl', 'kerberos'])

        # calculate transport if needed
        if self._winrm_transport is None or self._winrm_transport[0] is None:
            # TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic
            transport_selector = ['ssl'] if self._winrm_scheme == 'https' else ['plaintext']

            if HAVE_KERBEROS and ((self._winrm_user and '@' in self._winrm_user)):
                self._winrm_transport = ['kerberos'] + transport_selector
            else:
                self._winrm_transport = transport_selector

        unsupported_transports = set(self._winrm_transport).difference(self._winrm_supported_authtypes)

        if unsupported_transports:
            raise AnsibleError('The installed version of WinRM does not support transport(s) %s'
                               % to_native(list(unsupported_transports), nonstring='simplerepr'))

        # if kerberos is among our transports and there's a password specified, we're managing the tickets
        kinit_mode = self.get_option('kerberos_mode')
        if kinit_mode is None:
            # HACK: ideally, remove multi-transport stuff
            self._kerb_managed = "kerberos" in self._winrm_transport and (self._winrm_pass is not None and self._winrm_pass != "")
        elif kinit_mode == "managed":
            self._kerb_managed = True
        elif kinit_mode == "manual":
            self._kerb_managed = False

        # arg names we're going passing directly
        internal_kwarg_mask = set(['self', 'endpoint', 'transport', 'username', 'password', 'scheme', 'path', 'kinit_mode', 'kinit_cmd'])

        self._winrm_kwargs = dict(username=self._winrm_user, password=self._winrm_pass)
        argspec = getargspec(Protocol.__init__)
        supported_winrm_args = set(argspec.args)
        supported_winrm_args.update(internal_kwarg_mask)
        passed_winrm_args = set([v.replace('ansible_winrm_', '') for v in self.get_option('_extras')])
        unsupported_args = passed_winrm_args.difference(supported_winrm_args)

        # warn for kwargs unsupported by the installed version of pywinrm
        for arg in unsupported_args:
            display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg))

        # pass through matching extras, excluding the list we want to treat specially
        for arg in passed_winrm_args.difference(internal_kwarg_mask).intersection(supported_winrm_args):
            self._winrm_kwargs[arg] = self.get_option('_extras')['ansible_winrm_%s' % arg]

    # Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection
    # auth itself with a private CCACHE.
def _kerb_auth(self, principal, password): if password is None: password = "" self._kerb_ccache = tempfile.NamedTemporaryFile() display.vvvvv("creating Kerberos CC at %s" % self._kerb_ccache.name) krb5ccname = "FILE:%s" % self._kerb_ccache.name os.environ["KRB5CCNAME"] = krb5ccname krb5env = dict(KRB5CCNAME=krb5ccname) # Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR # '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation). kinit_cmdline = [self._kinit_cmd] kinit_args = self.get_option('kinit_args') if kinit_args: kinit_args = [to_text(a) for a in shlex.split(kinit_args) if a.strip()] kinit_cmdline.extend(kinit_args) elif boolean(self.get_option('_extras').get('ansible_winrm_kerberos_delegation', False)): kinit_cmdline.append('-f') kinit_cmdline.append(principal) # pexpect runs the process in its own pty so it can correctly send # the password as input even on MacOS which blocks subprocess from # doing so. Unfortunately it is not available on the built in Python # so we can only use it if someone has installed it if HAS_PEXPECT: proc_mechanism = "pexpect" command = kinit_cmdline.pop(0) password = to_text(password, encoding='utf-8', errors='surrogate_or_strict') display.vvvv("calling kinit with pexpect for principal %s" % principal) try: child = pexpect.spawn(command, kinit_cmdline, timeout=60, env=krb5env, echo=False) except pexpect.ExceptionPexpect as err: err_msg = "Kerberos auth failure when calling kinit cmd " \ "'%s': %s" % (command, to_native(err)) raise AnsibleConnectionFailure(err_msg) try: child.expect(".*:") child.sendline(password) except OSError as err: # child exited before the pass was sent, Ansible will raise # error based on the rc below, just display the error here display.vvvv("kinit with pexpect raised OSError: %s" % to_native(err)) # technically this is the stdout + stderr but to match the # subprocess error checking behaviour, we will call it stderr stderr = child.read() child.wait() rc = child.exitstatus else: proc_mechanism = "subprocess" password = to_bytes(password, encoding='utf-8', errors='surrogate_or_strict') display.vvvv("calling kinit with subprocess for principal %s" % principal) try: p = subprocess.Popen(kinit_cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=krb5env) except OSError as err: err_msg = "Kerberos auth failure when calling kinit cmd " \ "'%s': %s" % (self._kinit_cmd, to_native(err)) raise AnsibleConnectionFailure(err_msg) stdout, stderr = p.communicate(password + b'\n') rc = p.returncode != 0 if rc != 0: # one last attempt at making sure the password does not exist # in the output exp_msg = to_native(stderr.strip()) exp_msg = exp_msg.replace(to_native(password), "<redacted>") err_msg = "Kerberos auth failure for principal %s with %s: %s" \ % (principal, proc_mechanism, exp_msg) raise AnsibleConnectionFailure(err_msg) display.vvvvv("kinit succeeded for principal %s" % principal) def _winrm_connect(self): ''' Establish a WinRM connection over HTTP/HTTPS. 
''' display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" % (self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host) winrm_host = self._winrm_host if HAS_IPADDRESS: display.debug("checking if winrm_host %s is an IPv6 address" % winrm_host) try: ipaddress.IPv6Address(winrm_host) except ipaddress.AddressValueError: pass else: winrm_host = "[%s]" % winrm_host netloc = '%s:%d' % (winrm_host, self._winrm_port) endpoint = urlunsplit((self._winrm_scheme, netloc, self._winrm_path, '', '')) errors = [] for transport in self._winrm_transport: if transport == 'kerberos': if not HAVE_KERBEROS: errors.append('kerberos: the python kerberos library is not installed') continue if self._kerb_managed: self._kerb_auth(self._winrm_user, self._winrm_pass) display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host) try: winrm_kwargs = self._winrm_kwargs.copy() if self._winrm_connection_timeout: winrm_kwargs['operation_timeout_sec'] = self._winrm_connection_timeout winrm_kwargs['read_timeout_sec'] = self._winrm_connection_timeout + 1 protocol = Protocol(endpoint, transport=transport, **winrm_kwargs) # open the shell from connect so we know we're able to talk to the server if not self.shell_id: self.shell_id = protocol.open_shell(codepage=65001) # UTF-8 display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host) return protocol except Exception as e: err_msg = to_text(e).strip() if re.search(to_text(r'Operation\s+?timed\s+?out'), err_msg, re.I): raise AnsibleError('the connection attempt timed out') m = re.search(to_text(r'Code\s+?(\d{3})'), err_msg) if m: code = int(m.groups()[0]) if code == 401: err_msg = 'the specified credentials were rejected by the server' elif code == 411: return protocol errors.append(u'%s: %s' % (transport, err_msg)) display.vvvvv(u'WINRM CONNECTION ERROR: %s\n%s' % (err_msg, to_text(traceback.format_exc())), host=self._winrm_host) if errors: raise AnsibleConnectionFailure(', '.join(map(to_native, errors))) else: raise AnsibleError('No transport found for WinRM connection') def _winrm_send_input(self, protocol, shell_id, command_id, stdin, eof=False): rq = {'env:Envelope': protocol._get_soap_header( resource_uri='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd', action='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Send', shell_id=shell_id)} stream = rq['env:Envelope'].setdefault('env:Body', {}).setdefault('rsp:Send', {})\ .setdefault('rsp:Stream', {}) stream['@Name'] = 'stdin' stream['@CommandId'] = command_id stream['#text'] = base64.b64encode(to_bytes(stdin)) if eof: stream['@End'] = 'true' protocol.send_message(xmltodict.unparse(rq)) def _winrm_exec(self, command, args=(), from_exec=False, stdin_iterator=None): if not self.protocol: self.protocol = self._winrm_connect() self._connected = True if from_exec: display.vvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host) else: display.vvvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host) command_id = None try: stdin_push_failed = False command_id = self.protocol.run_command(self.shell_id, to_bytes(command), map(to_bytes, args), console_mode_stdin=(stdin_iterator is None)) try: if stdin_iterator: for (data, is_last) in stdin_iterator: self._winrm_send_input(self.protocol, self.shell_id, command_id, data, eof=is_last) except Exception as ex: display.warning("ERROR DURING WINRM SEND INPUT - attempting to recover: %s %s" % (type(ex).__name__, to_text(ex))) display.debug(traceback.format_exc()) 
stdin_push_failed = True # NB: this can hang if the receiver is still running (eg, network failed a Send request but the server's still happy). # FUTURE: Consider adding pywinrm status check/abort operations to see if the target is still running after a failure. resptuple = self.protocol.get_command_output(self.shell_id, command_id) # ensure stdout/stderr are text for py3 # FUTURE: this should probably be done internally by pywinrm response = Response(tuple(to_text(v) if isinstance(v, binary_type) else v for v in resptuple)) # TODO: check result from response and set stdin_push_failed if we have nonzero if from_exec: display.vvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host) else: display.vvvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host) display.vvvvvv('WINRM STDOUT %s' % to_text(response.std_out), host=self._winrm_host) display.vvvvvv('WINRM STDERR %s' % to_text(response.std_err), host=self._winrm_host) if stdin_push_failed: # There are cases where the stdin input failed but the WinRM service still processed it. We attempt to # see if stdout contains a valid json return value so we can ignore this error try: filtered_output, dummy = _filter_non_json_lines(response.std_out) json.loads(filtered_output) except ValueError: # stdout does not contain a return response, stdin input was a fatal error stderr = to_bytes(response.std_err, encoding='utf-8') if stderr.startswith(b"#< CLIXML"): stderr = _parse_clixml(stderr) raise AnsibleError('winrm send_input failed; \nstdout: %s\nstderr %s' % (to_native(response.std_out), to_native(stderr))) return response except requests.exceptions.Timeout as exc: raise AnsibleConnectionFailure('winrm connection error: %s' % to_native(exc)) finally: if command_id: self.protocol.cleanup_command(self.shell_id, command_id) def _connect(self): if not HAS_WINRM: raise AnsibleError("winrm or requests is not installed: %s" % to_native(WINRM_IMPORT_ERR)) elif not HAS_XMLTODICT: raise AnsibleError("xmltodict is not installed: %s" % to_native(XMLTODICT_IMPORT_ERR)) super(Connection, self)._connect() if not self.protocol: self._build_winrm_kwargs() # build the kwargs from the options set self.protocol = self._winrm_connect() self._connected = True return self def reset(self): self.protocol = None self.shell_id = None self._connect() def _wrapper_payload_stream(self, payload, buffer_size=200000): payload_bytes = to_bytes(payload) byte_count = len(payload_bytes) for i in range(0, byte_count, buffer_size): yield payload_bytes[i:i + buffer_size], i + buffer_size >= byte_count def exec_command(self, cmd, in_data=None, sudoable=True): super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable) cmd_parts = self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False) # TODO: display something meaningful here display.vvv("EXEC (via pipeline wrapper)") stdin_iterator = None if in_data: stdin_iterator = self._wrapper_payload_stream(in_data) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True, stdin_iterator=stdin_iterator) result.std_out = to_bytes(result.std_out) result.std_err = to_bytes(result.std_err) # parse just stderr from CLIXML output if result.std_err.startswith(b"#< CLIXML"): try: result.std_err = _parse_clixml(result.std_err) except Exception: # unsure if we're guaranteed a valid xml doc- use raw output in case of error pass return (result.status_code, result.std_out, result.std_err) # FUTURE: determine buffer size at runtime via remote winrm config? 
def _put_file_stdin_iterator(self, in_path, out_path, buffer_size=250000): in_size = os.path.getsize(to_bytes(in_path, errors='surrogate_or_strict')) offset = 0 with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file: for out_data in iter((lambda: in_file.read(buffer_size)), b''): offset += len(out_data) self._display.vvvvv('WINRM PUT "%s" to "%s" (offset=%d size=%d)' % (in_path, out_path, offset, len(out_data)), host=self._winrm_host) # yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded b64_data = base64.b64encode(out_data) + b'\r\n' # cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal yield b64_data, (in_file.tell() == in_size) if offset == 0: # empty file, return an empty buffer + eof to close it yield "", True def put_file(self, in_path, out_path): super(Connection, self).put_file(in_path, out_path) out_path = self._shell._unquote(out_path) display.vvv('PUT "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host) if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')): raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path)) script_template = u''' begin {{ $path = '{0}' $DebugPreference = "Continue" $ErrorActionPreference = "Stop" Set-StrictMode -Version 2 $fd = [System.IO.File]::Create($path) $sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create() $bytes = @() #initialize for empty file case }} process {{ $bytes = [System.Convert]::FromBase64String($input) $sha1.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) | Out-Null $fd.Write($bytes, 0, $bytes.Length) }} end {{ $sha1.TransformFinalBlock($bytes, 0, 0) | Out-Null $hash = [System.BitConverter]::ToString($sha1.Hash).Replace("-", "").ToLowerInvariant() $fd.Close() Write-Output "{{""sha1"":""$hash""}}" }} ''' script = script_template.format(self._shell._escape(out_path)) cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], stdin_iterator=self._put_file_stdin_iterator(in_path, out_path)) # TODO: improve error handling if result.status_code != 0: raise AnsibleError(to_native(result.std_err)) try: put_output = json.loads(result.std_out) except ValueError: # stdout does not contain a valid response stderr = to_bytes(result.std_err, encoding='utf-8') if stderr.startswith(b"#< CLIXML"): stderr = _parse_clixml(stderr) raise AnsibleError('winrm put_file failed; \nstdout: %s\nstderr %s' % (to_native(result.std_out), to_native(stderr))) remote_sha1 = put_output.get("sha1") if not remote_sha1: raise AnsibleError("Remote sha1 was not returned") local_sha1 = secure_hash(in_path) if not remote_sha1 == local_sha1: raise AnsibleError("Remote sha1 hash {0} does not match local hash {1}".format(to_native(remote_sha1), to_native(local_sha1))) def fetch_file(self, in_path, out_path): super(Connection, self).fetch_file(in_path, out_path) in_path = self._shell._unquote(in_path) out_path = out_path.replace('\\', '/') # consistent with other connection plugins, we assume the caller has created the target dir display.vvv('FETCH "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host) buffer_size = 2**19 # 0.5MB chunks out_file = None try: offset = 0 while True: try: script = ''' $path = "%(path)s" If (Test-Path -Path $path -PathType Leaf) { $buffer_size = %(buffer_size)d $offset = %(offset)d $stream = New-Object -TypeName 
IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite) $stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null $buffer = New-Object -TypeName byte[] $buffer_size $bytes_read = $stream.Read($buffer, 0, $buffer_size) if ($bytes_read -gt 0) { $bytes = $buffer[0..($bytes_read - 1)] [System.Convert]::ToBase64String($bytes) } $stream.Close() > $null } ElseIf (Test-Path -Path $path -PathType Container) { Write-Host "[DIR]"; } Else { Write-Error "$path does not exist"; Exit 1; } ''' % dict(buffer_size=buffer_size, path=self._shell._escape(in_path), offset=offset) display.vvvvv('WINRM FETCH "%s" to "%s" (offset=%d)' % (in_path, out_path, offset), host=self._winrm_host) cmd_parts = self._shell._encode_script(script, as_list=True, preserve_rc=False) result = self._winrm_exec(cmd_parts[0], cmd_parts[1:]) if result.status_code != 0: raise IOError(to_native(result.std_err)) if result.std_out.strip() == '[DIR]': data = None else: data = base64.b64decode(result.std_out.strip()) if data is None: break else: if not out_file: # If out_path is a directory and we're expecting a file, bail out now. if os.path.isdir(to_bytes(out_path, errors='surrogate_or_strict')): break out_file = open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb') out_file.write(data) if len(data) < buffer_size: break offset += len(data) except Exception: traceback.print_exc() raise AnsibleError('failed to transfer file to "%s"' % to_native(out_path)) finally: if out_file: out_file.close() def close(self): if self.protocol and self.shell_id: display.vvvvv('WINRM CLOSE SHELL: %s' % self.shell_id, host=self._winrm_host) self.protocol.close_shell(self.shell_id) self.shell_id = None self.protocol = None self._connected = False
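The `fetch_file` script above interpolates the remote path into a double-quoted PowerShell string (`$path = "%(path)s"`) after passing it through `self._shell._escape(in_path)`. As the `powershell.py` shell plugin in the next record shows, `_escape` only escapes `$` when called with `include_vars=True`, which `fetch_file` does not do, so a path such as `C:\Temp\testfile_with_$dollar.txt` remains subject to PowerShell variable expansion. A minimal standalone sketch of that escaping behaviour, re-derived from the plugin source (the free function `escape` is illustrative, not the real plugin API):

```python
# Standalone re-derivation of ShellModule._escape from powershell.py
# (illustrative; the real method lives on the shell plugin class).
import re

def escape(value, include_vars=False):
    # Same substitution table as the plugin source.
    subs = [('\n', '`n'), ('\r', '`r'), ('\t', '`t'), ('\a', '`a'),
            ('\b', '`b'), ('\f', '`f'), ('\v', '`v'), ('"', '`"'),
            ("'", "`'"), ('`', '``'), ('\x00', '`0')]
    if include_vars:
        subs.append(('$', '`$'))
    pattern = '|'.join('(%s)' % re.escape(p) for p, s in subs)
    substs = [s for p, s in subs]
    return re.sub(pattern, lambda m: substs[m.lastindex - 1], value)

path = 'C:\\Temp\\testfile_with_$dollar.txt'
print(escape(path))                     # $ untouched: PowerShell expands $dollar
print(escape(path, include_vars=True))  # C:\Temp\testfile_with_`$dollar.txt
```

With the default `include_vars=False`, the dollar sign reaches PowerShell unescaped, which matches the `VariableIsUndefined` errors in the issue report below.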
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> `fetch` module fails on Windows with file names containing dollar signs (winrm FETCH does not quote these from PowerShell). When using `|quote` or backslash-escaping the dollars, slurp, which is executed by fetch, does handle such paths correctly and therefore fails on pre-quoted ones. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> fetch ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.7.4 config file = /etc/ansible/ansible.cfg configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400 DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action'] DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback'] DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json'] DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection'] DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter'] DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30 DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml'] DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup'] DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15 DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles'] DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars'] INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list'] RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> * Control host: CentOS 7.6 * target: Microsoft Windows Server 2008 R2 Standard Service Pack 1 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Create a file such as `C:\Temp\file_with_$dollar.txt` <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Get Logs hosts: all vars: log_files: - "C:/Temp/testfile_with_$dollar.txt" tasks: - name: touch test files win_file: path: "{{ item }}" state: touch with_items: "{{ log_files }}" - name: fetch log files fetch: src: "{{ item }}" dest: "{{ tmp_dir }}/{{ item | basename }}" flat: yes with_items: "{{ log_files }}" ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> File gets copied from Windows remote to localhost. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ansible-playbook 2.7.4 config file = /etc/ansible/ansible.cfg configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0] Using /etc/ansible/ansible.cfg as config file setting up inventory plugins Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin /etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected Set default localhost to localhost Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py PLAYBOOK: test.yml *************************************************************************************************************************************************************************** 1 plays in test.yml PLAY [testclient] **************************************************************************************************************************************************************************** META: ran handlers TASK [touch test files] ********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:7 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => { "changed": true, "item": "C:/Temp/testfile_with_$dollar.txt" } TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/packages/co2mo/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> 
ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) <testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt" Traceback (most recent call last): File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file raise IOError(to_native(result.std_err)) OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs> fatal: [testclient]: FAILED! => {} MSG: failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt" PLAY RECAP *********************************************************************************************************************************************************************************** testclient : ok=1 changed=1 unreachable=0 failed=1 ``` When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`: ``` TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) <testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt" Traceback (most recent call last): File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file raise IOError(to_native(result.std_err)) OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been 
set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs> fatal: [testclient]: FAILED! => {} ``` When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`: ``` TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1 EXEC (via pipeline wrapper) failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => { "changed": false, "item": "C:/Temp/testfile_with_$dollar.txt" } MSG: Path C:\Temp\testfile_with_\$dollar.txt is not found ``` ##### WORKAROUND ```yaml - name: create temporary copy of log files win_copy: remote_src: true src: "{{ item }}" dest: "{{ item | replace('$', 'DOLLAR') }}.transfer" force: no with_items: "{{ log_files }}" - name: fetch log files fetch: src: "{{ item | replace('$', 'DOLLAR') }}.transfer" dest: "{{ tmp_dir }}/{{ item | basename }}" flat: yes with_items: "{{ log_files }}" - name: remove temporary copy of log files win_file: path: "{{ item | replace('$', 'DOLLAR') }}.transfer" state: absent with_items: "{{ log_files }}" ```
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
lib/ansible/plugins/shell/powershell.py
# Copyright (c) 2014, Chris Church <[email protected]> # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = ''' name: powershell plugin_type: shell version_added: historical short_description: Windows PowerShell description: - The only option when using 'winrm' or 'psrp' as a connection plugin. - Can also be used when using 'ssh' as a connection plugin and the C(DefaultShell) has been configured to PowerShell. extends_documentation_fragment: - shell_windows ''' import base64 import os import re import shlex import pkgutil import xml.etree.ElementTree as ET import ntpath from ansible.module_utils._text import to_bytes, to_text from ansible.plugins.shell import ShellBase _common_args = ['PowerShell', '-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted'] # Primarily for testing, allow explicitly specifying PowerShell version via # an environment variable. _powershell_version = os.environ.get('POWERSHELL_VERSION', None) if _powershell_version: _common_args = ['PowerShell', '-Version', _powershell_version] + _common_args[1:] def _parse_clixml(data, stream="Error"): """ Takes a byte string like '#< CLIXML\r\n<Objs...' and extracts the stream message encoded in the XML data. CLIXML is used by PowerShell to encode multiple objects in stderr. """ lines = [] # There are some scenarios where the stderr contains a nested CLIXML element like # '<# CLIXML\r\n<# CLIXML\r\n<Objs>...</Objs><Objs>...</Objs>'. # Parse each individual <Objs> element and add the error strings to our stderr list. # https://github.com/ansible/ansible/issues/69550 while data: end_idx = data.find(b"</Objs>") + 7 current_element = data[data.find(b"<Objs "):end_idx] data = data[end_idx:] clixml = ET.fromstring(current_element) namespace_match = re.match(r'{(.*)}', clixml.tag) namespace = "{%s}" % namespace_match.group(1) if namespace_match else "" strings = clixml.findall("./%sS" % namespace) lines.extend([e.text.replace('_x000D__x000A_', '') for e in strings if e.attrib.get('S') == stream]) return to_bytes('\r\n'.join(lines)) class ShellModule(ShellBase): # Common shell filenames that this plugin handles # Powershell is handled differently. It's selected when winrm is the # connection COMPATIBLE_SHELLS = frozenset() # Family of shells this has. Must match the filename without extension SHELL_FAMILY = 'powershell' _SHELL_REDIRECT_ALLNULL = '> $null' _SHELL_AND = ';' # Used by various parts of Ansible to do Windows specific changes _IS_WINDOWS = True # TODO: add binary module support def env_prefix(self, **kwargs): # powershell/winrm env handling is handled in the exec wrapper return "" def join_path(self, *args): # use normpath() to remove doubled slashed and convert forward to backslashes parts = [ntpath.normpath(self._unquote(arg)) for arg in args] # Becuase ntpath.join treats any component that begins with a backslash as an absolute path, # we have to strip slashes from at least the beginning, otherwise join will ignore all previous # path components except for the drive. 
return ntpath.join(parts[0], *[part.strip('\\') for part in parts[1:]]) def get_remote_filename(self, pathname): # powershell requires that script files end with .ps1 base_name = os.path.basename(pathname.strip()) name, ext = os.path.splitext(base_name.strip()) if ext.lower() not in ['.ps1', '.exe']: return name + '.ps1' return base_name.strip() def path_has_trailing_slash(self, path): # Allow Windows paths to be specified using either slash. path = self._unquote(path) return path.endswith('/') or path.endswith('\\') def chmod(self, paths, mode): raise NotImplementedError('chmod is not implemented for Powershell') def chown(self, paths, user): raise NotImplementedError('chown is not implemented for Powershell') def set_user_facl(self, paths, user, mode): raise NotImplementedError('set_user_facl is not implemented for Powershell') def remove(self, path, recurse=False): path = self._escape(self._unquote(path)) if recurse: return self._encode_script('''Remove-Item "%s" -Force -Recurse;''' % path) else: return self._encode_script('''Remove-Item "%s" -Force;''' % path) def mkdtemp(self, basefile=None, system=False, mode=None, tmpdir=None): # Windows does not have an equivalent for the system temp files, so # the param is ignored if not basefile: basefile = self.__class__._generate_temp_dir_name() basefile = self._escape(self._unquote(basefile)) basetmpdir = tmpdir if tmpdir else self.get_option('remote_tmp') script = ''' $tmp_path = [System.Environment]::ExpandEnvironmentVariables('%s') $tmp = New-Item -Type Directory -Path $tmp_path -Name '%s' Write-Output -InputObject $tmp.FullName ''' % (basetmpdir, basefile) return self._encode_script(script.strip()) def expand_user(self, user_home_path, username=''): # PowerShell only supports "~" (not "~username"). Resolve-Path ~ does # not seem to work remotely, though by default we are always starting # in the user's home directory. 
user_home_path = self._unquote(user_home_path) if user_home_path == '~': script = 'Write-Output (Get-Location).Path' elif user_home_path.startswith('~\\'): script = 'Write-Output ((Get-Location).Path + "%s")' % self._escape(user_home_path[1:]) else: script = 'Write-Output "%s"' % self._escape(user_home_path) return self._encode_script(script) def exists(self, path): path = self._escape(self._unquote(path)) script = ''' If (Test-Path "%s") { $res = 0; } Else { $res = 1; } Write-Output "$res"; Exit $res; ''' % path return self._encode_script(script) def checksum(self, path, *args, **kwargs): path = self._escape(self._unquote(path)) script = ''' If (Test-Path -PathType Leaf "%(path)s") { $sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider; $fp = [System.IO.File]::Open("%(path)s", [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read); [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower(); $fp.Dispose(); } ElseIf (Test-Path -PathType Container "%(path)s") { Write-Output "3"; } Else { Write-Output "1"; } ''' % dict(path=path) return self._encode_script(script) def build_module_command(self, env_string, shebang, cmd, arg_path=None): bootstrap_wrapper = pkgutil.get_data("ansible.executor.powershell", "bootstrap_wrapper.ps1") # pipelining bypass if cmd == '': return self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False) # non-pipelining cmd_parts = shlex.split(cmd, posix=False) cmd_parts = list(map(to_text, cmd_parts)) if shebang and shebang.lower() == '#!powershell': if not self._unquote(cmd_parts[0]).lower().endswith('.ps1'): # we're running a module via the bootstrap wrapper cmd_parts[0] = '"%s.ps1"' % self._unquote(cmd_parts[0]) wrapper_cmd = "type " + cmd_parts[0] + " | " + self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False) return wrapper_cmd elif shebang and shebang.startswith('#!'): cmd_parts.insert(0, shebang[2:]) elif not shebang: # The module is assumed to be a binary cmd_parts[0] = self._unquote(cmd_parts[0]) cmd_parts.append(arg_path) script = ''' Try { %s %s } Catch { $_obj = @{ failed = $true } If ($_.Exception.GetType) { $_obj.Add('msg', $_.Exception.Message) } Else { $_obj.Add('msg', $_.ToString()) } If ($_.InvocationInfo.PositionMessage) { $_obj.Add('exception', $_.InvocationInfo.PositionMessage) } ElseIf ($_.ScriptStackTrace) { $_obj.Add('exception', $_.ScriptStackTrace) } Try { $_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json)) } Catch { } Echo $_obj | ConvertTo-Json -Compress -Depth 99 Exit 1 } ''' % (env_string, ' '.join(cmd_parts)) return self._encode_script(script, preserve_rc=False) def wrap_for_exec(self, cmd): return '& %s; exit $LASTEXITCODE' % cmd def _unquote(self, value): '''Remove any matching quotes that wrap the given value.''' value = to_text(value or '') m = re.match(r'^\s*?\'(.*?)\'\s*?$', value) if m: return m.group(1) m = re.match(r'^\s*?"(.*?)"\s*?$', value) if m: return m.group(1) return value def _escape(self, value, include_vars=False): '''Return value escaped for use in PowerShell command.''' # http://www.techotopia.com/index.php/Windows_PowerShell_1.0_String_Quoting_and_Escape_Sequences # http://stackoverflow.com/questions/764360/a-list-of-string-replacements-in-python subs = [('\n', '`n'), ('\r', '`r'), ('\t', '`t'), ('\a', '`a'), ('\b', '`b'), ('\f', '`f'), ('\v', '`v'), ('"', '`"'), ('\'', '`\''), ('`', '``'), ('\x00', '`0')] if include_vars: subs.append(('$', '`$')) pattern = '|'.join('(%s)' % re.escape(p) for p, 
s in subs) substs = [s for p, s in subs] def replace(m): return substs[m.lastindex - 1] return re.sub(pattern, replace, value) def _encode_script(self, script, as_list=False, strict_mode=True, preserve_rc=True): '''Convert a PowerShell script to a single base64-encoded command.''' script = to_text(script) if script == u'-': cmd_parts = _common_args + ['-Command', '-'] else: if strict_mode: script = u'Set-StrictMode -Version Latest\r\n%s' % script # try to propagate exit code if present- won't work with begin/process/end-style scripts (ala put_file) # NB: the exit code returned may be incorrect in the case of a successful command followed by an invalid command if preserve_rc: script = u'%s\r\nIf (-not $?) { If (Get-Variable LASTEXITCODE -ErrorAction SilentlyContinue) { exit $LASTEXITCODE } Else { exit 1 } }\r\n'\ % script script = '\n'.join([x.strip() for x in script.splitlines() if x.strip()]) encoded_script = to_text(base64.b64encode(script.encode('utf-16-le')), 'utf-8') cmd_parts = _common_args + ['-EncodedCommand', encoded_script] if as_list: return cmd_parts return ' '.join(cmd_parts)
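`_parse_clixml` above is what turns stderr blobs like the `#< CLIXML <Objs ...>` output quoted in the issue below into readable error text; `fetch_file`, by contrast, raises the raw `result.std_err`, which is why the reported traceback ends in an undecoded CLIXML wall. A short sketch of the same decoding steps (the sample payload is fabricated and trimmed for illustration):

```python
# Decoding a CLIXML stderr blob, following the same steps as
# _parse_clixml() above.
import re
import xml.etree.ElementTree as ET

data = (b'#< CLIXML\r\n'
        b'<Objs Version="1.1.0.1" '
        b'xmlns="http://schemas.microsoft.com/powershell/2004/04">'
        b'<S S="Error">variable $dollar is not set_x000D__x000A_</S>'
        b'<S S="Error">at line:2 char:32</S>'
        b'</Objs>')

end_idx = data.find(b"</Objs>") + 7
clixml = ET.fromstring(data[data.find(b"<Objs "):end_idx])
namespace = re.match(r'{.*}', clixml.tag).group(0)  # '{schemas...}' prefix
lines = [e.text.replace('_x000D__x000A_', '')
         for e in clixml.findall('./%sS' % namespace)
         if e.attrib.get('S') == 'Error']
print('\r\n'.join(lines))  # readable error text instead of raw CLIXML
```

Running this prints the two error strings joined with CRLF, the same normalisation `_parse_clixml` applies to each `<S S="Error">` element.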
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> `fetch` module fails on Windows with file names containing dollar signs (winrm FETCH does not quote these from PowerShell). When using `|quote` or backslash-escaping the dollars, slurp, which is executed by fetch, does handle such paths correctly and therefore fails on pre-quoted ones. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> fetch ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.7.4 config file = /etc/ansible/ansible.cfg configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400 DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action'] DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback'] DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json'] DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection'] DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter'] DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30 DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml'] DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup'] DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15 DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles'] DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars'] INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list'] RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> * Control host: CentOS 7.6 * target: Microsoft Windows Server 2008 R2 Standard Service Pack 1 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Create a file such as `C:\Temp\file_with_$dollar.txt` <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Get Logs hosts: all vars: log_files: - "C:/Temp/testfile_with_$dollar.txt" tasks: - name: touch test files win_file: path: "{{ item }}" state: touch with_items: "{{ log_files }}" - name: fetch log files fetch: src: "{{ item }}" dest: "{{ tmp_dir }}/{{ item | basename }}" flat: yes with_items: "{{ log_files }}" ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> File gets copied from Windows remote to localhost. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ansible-playbook 2.7.4 config file = /etc/ansible/ansible.cfg configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules'] ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0] Using /etc/ansible/ansible.cfg as config file setting up inventory plugins Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin /etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected Set default localhost to localhost Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py PLAYBOOK: test.yml *************************************************************************************************************************************************************************** 1 plays in test.yml PLAY [testclient] **************************************************************************************************************************************************************************** META: ran handlers TASK [touch test files] ********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:7 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => { "changed": true, "item": "C:/Temp/testfile_with_$dollar.txt" } TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/packages/co2mo/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> 
ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) <testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt" Traceback (most recent call last): File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file raise IOError(to_native(result.std_err)) OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs> fatal: [testclient]: FAILED! => {} MSG: failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt" PLAY RECAP *********************************************************************************************************************************************************************************** testclient : ok=1 changed=1 unreachable=0 failed=1 ``` When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`: ``` TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) <testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt" Traceback (most recent call last): File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file raise IOError(to_native(result.std_err)) OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been 
set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs> fatal: [testclient]: FAILED! => {} ``` When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`: ``` TASK [fetch log files] *********************************************************************************************************************************************************************** task path: /home/testuser/ansible/test.yml:12 Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1 <testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient checking if winrm_host testclient is an IPv6 address EXEC (via pipeline wrapper) Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1 EXEC (via pipeline wrapper) failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => { "changed": false, "item": "C:/Temp/testfile_with_$dollar.txt" } MSG: Path C:\Temp\testfile_with_\$dollar.txt is not found ``` ##### WORKAROUND ```yaml - name: create temporary copy of log files win_copy: remote_src: true src: "{{ item }}" dest: "{{ item | replace('$', 'DOLLAR') }}.transfer" force: no with_items: "{{ log_files }}" - name: fetch log files fetch: src: "{{ item | replace('$', 'DOLLAR') }}.transfer" dest: "{{ tmp_dir }}/{{ item | basename }}" flat: yes with_items: "{{ log_files }}" - name: remove temporary copy of log files win_file: path: "{{ item | replace('$', 'DOLLAR') }}.transfer" state: absent with_items: "{{ log_files }}" ```
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
test/integration/targets/win_fetch/meta/main.yml
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
##### SUMMARY
The `fetch` module fails on Windows when file names contain dollar signs: the winrm connection's FETCH operation does not quote them before the path is interpolated into a PowerShell command. Pre-escaping the path with `|quote` or by backslash-escaping the dollars does not help either, because `slurp`, which `fetch` runs first, already handles such paths correctly and therefore fails on the pre-quoted ones.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
fetch

##### ANSIBLE VERSION
```paste below
ansible 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
```

##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400
DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback']
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json']
DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection']
DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup']
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus
DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars']
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
* Control host: CentOS 7.6
* target: Microsoft Windows Server 2008 R2 Standard Service Pack 1

##### STEPS TO REPRODUCE
Create a file such as `C:\Temp\testfile_with_$dollar.txt` (the first task below does this), then fetch it:

```yaml
- name: Get Logs
  hosts: all
  vars:
    log_files:
      - "C:/Temp/testfile_with_$dollar.txt"
  tasks:
    - name: touch test files
      win_file:
        path: "{{ item }}"
        state: touch
      with_items: "{{ log_files }}"

    - name: fetch log files
      fetch:
        src: "{{ item }}"
        dest: "{{ tmp_dir }}/{{ item | basename }}"
        flat: yes
      with_items: "{{ log_files }}"
```

##### EXPECTED RESULTS
File gets copied from Windows remote to localhost.

##### ACTUAL RESULTS
```paste below
ansible-playbook 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin
/etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected
Set default localhost to localhost
Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin
Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py
Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py

PLAYBOOK: test.yml ***************************************************************************************************************************************************************************
1 plays in test.yml

PLAY [testclient] ****************************************************************************************************************************************************************************
META: ran handlers

TASK [touch test files] **********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:7
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": true,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/packages/co2mo/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [testclient]: FAILED! => {}

MSG:

failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt"

PLAY RECAP ***********************************************************************************************************************************************************************************
testclient : ok=1 changed=1 unreachable=0 failed=1
```

When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [testclient]: FAILED! => {}
```

When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1
EXEC (via pipeline wrapper)
failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": false,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

MSG:

Path C:\Temp\testfile_with_\$dollar.txt is not found
```

##### WORKAROUND
```yaml
- name: create temporary copy of log files
  win_copy:
    remote_src: true
    src: "{{ item }}"
    dest: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    force: no
  with_items: "{{ log_files }}"

- name: fetch log files
  fetch:
    src: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    dest: "{{ tmp_dir }}/{{ item | basename }}"
    flat: yes
  with_items: "{{ log_files }}"

- name: remove temporary copy of log files
  win_file:
    path: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    state: absent
  with_items: "{{ log_files }}"
```
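The `|replace('$', '\\$')` attempt above fails for a simple reason: PowerShell's escape character is the backtick, not the backslash, so `\$` arrives on the target as two literal characters and `slurp` reports the path as not found. A sketch, under the assumption that the path ends up inside a PowerShell double-quoted string, of the escaping that context would actually need (hypothetical helper, not Ansible's API):

```python
def ps_double_quote(path):
    # Inside PowerShell double quotes, '`' is the escape character.
    # Escape the backtick itself first, then '$' and '"', so the
    # backticks we insert are never re-escaped.
    for ch in ("`", "$", '"'):
        path = path.replace(ch, "`" + ch)
    return '"' + path + '"'

print(ps_double_quote(r"C:\Temp\testfile_with_$dollar.txt"))
# "C:\Temp\testfile_with_`$dollar.txt"
```

The same reasoning explains the `|quote` failure: it produces POSIX shell quoting, which carries no meaning in PowerShell.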
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
test/integration/targets/win_fetch/tasks/main.yml
# test code for the fetch module when using winrm connection
# (c) 2014, Chris Church <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

- name: define host-specific host_output_dir
  set_fact:
    host_output_dir: "{{ output_dir }}/{{ inventory_hostname }}"

- name: clean out the test directory
  file: name={{ host_output_dir|mandatory }} state=absent
  delegate_to: localhost
  run_once: true

- name: create the test directory
  file: name={{ host_output_dir }} state=directory
  delegate_to: localhost
  run_once: true

- name: fetch a small file
  fetch: src="C:/Windows/win.ini" dest={{ host_output_dir }}
  register: fetch_small

- name: check fetch small result
  assert:
    that:
      - "fetch_small.changed"

- name: check file created by fetch small
  stat: path={{ fetch_small.dest }}
  delegate_to: localhost
  register: fetch_small_stat

- name: verify fetched small file exists locally
  assert:
    that:
      - "fetch_small_stat.stat.exists"
      - "fetch_small_stat.stat.isreg"
      - "fetch_small_stat.stat.checksum == fetch_small.checksum"

- name: fetch the same small file
  fetch: src="C:/Windows/win.ini" dest={{ host_output_dir }}
  register: fetch_small_again

- name: check fetch small result again
  assert:
    that:
      - "not fetch_small_again.changed"

- name: fetch a small file to flat namespace
  fetch: src="C:/Windows/win.ini" dest="{{ host_output_dir }}/" flat=yes
  register: fetch_flat

- name: check fetch flat result
  assert:
    that:
      - "fetch_flat.changed"

- name: check file created by fetch flat
  stat: path="{{ host_output_dir }}/win.ini"
  delegate_to: localhost
  register: fetch_flat_stat

- name: verify fetched file exists locally in host_output_dir
  assert:
    that:
      - "fetch_flat_stat.stat.exists"
      - "fetch_flat_stat.stat.isreg"
      - "fetch_flat_stat.stat.checksum == fetch_flat.checksum"

#- name: fetch a small file to flat directory (without trailing slash)
#  fetch: src="C:/Windows/win.ini" dest="{{ host_output_dir }}" flat=yes
#  register: fetch_flat_dir

#- name: check fetch flat to directory result
#  assert:
#    that:
#      - "fetch_flat_dir is not changed"

- name: fetch a large binary file
  fetch: src="C:/Windows/explorer.exe" dest={{ host_output_dir }}
  register: fetch_large

- name: check fetch large binary file result
  assert:
    that:
      - "fetch_large.changed"

- name: check file created by fetch large binary
  stat: path={{ fetch_large.dest }}
  delegate_to: localhost
  register: fetch_large_stat

- name: verify fetched large file exists locally
  assert:
    that:
      - "fetch_large_stat.stat.exists"
      - "fetch_large_stat.stat.isreg"
      - "fetch_large_stat.stat.checksum == fetch_large.checksum"

- name: fetch a large binary file again
  fetch: src="C:/Windows/explorer.exe" dest={{ host_output_dir }}
  register: fetch_large_again

- name: check fetch large binary file result again
  assert:
    that:
      - "not fetch_large_again.changed"

- name: fetch a small file using backslashes in src path
  fetch: src="C:\\Windows\\system.ini" dest={{ host_output_dir }}
  register: fetch_small_bs

- name: check fetch small result with backslashes
  assert:
    that:
      - "fetch_small_bs.changed"

- name: check file created by fetch small with backslashes
  stat: path={{ fetch_small_bs.dest }}
  delegate_to: localhost
  register: fetch_small_bs_stat

- name: verify fetched small file with backslashes exists locally
  assert:
    that:
      - "fetch_small_bs_stat.stat.exists"
      - "fetch_small_bs_stat.stat.isreg"
      - "fetch_small_bs_stat.stat.checksum == fetch_small_bs.checksum"

- name: attempt to fetch a non-existent file - do not fail on missing
  fetch: src="C:/this_file_should_not_exist.txt" dest={{ host_output_dir }} fail_on_missing=no
  register: fetch_missing_nofail

- name: check fetch missing no fail result
  assert:
    that:
      - "fetch_missing_nofail is not failed"
      - "fetch_missing_nofail.msg"
      - "fetch_missing_nofail is not changed"

- name: attempt to fetch a non-existent file - fail on missing
  fetch: src="~/this_file_should_not_exist.txt" dest={{ host_output_dir }} fail_on_missing=yes
  register: fetch_missing
  ignore_errors: true

- name: check fetch missing with failure
  assert:
    that:
      - "fetch_missing is failed"
      - "fetch_missing.msg"
      - "fetch_missing is not changed"

- name: attempt to fetch a non-existent file - fail on missing implicit
  fetch: src="~/this_file_should_not_exist.txt" dest={{ host_output_dir }}
  register: fetch_missing_implicit
  ignore_errors: true

- name: check fetch missing with failure on implicit
  assert:
    that:
      - "fetch_missing_implicit is failed"
      - "fetch_missing_implicit.msg"
      - "fetch_missing_implicit is not changed"

- name: attempt to fetch a directory
  fetch: src="C:\\Windows" dest={{ host_output_dir }}
  register: fetch_dir
  ignore_errors: true

- name: check fetch directory result
  assert:
    that:
      # Doesn't fail anymore, only returns a message.
      - "fetch_dir is not changed"
      - "fetch_dir.msg"
closed
ansible/ansible
https://github.com/ansible/ansible
62,781
fetch fails on Windows filenames containing dollar sign
##### SUMMARY
The `fetch` module fails on Windows when file names contain dollar signs: the winrm connection's FETCH operation does not quote them before the path is interpolated into a PowerShell command. Pre-escaping the path with `|quote` or by backslash-escaping the dollars does not help either, because `slurp`, which `fetch` runs first, already handles such paths correctly and therefore fails on the pre-quoted ones.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
fetch

##### ANSIBLE VERSION
```paste below
ansible 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
```

##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /var/lib/ansible/facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 86400
DEFAULT_ACTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/action', '/etc/venus/common/ansible/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/callback', '/etc/venus/common/ansible/callback']
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['log_plays', 'playbook_name', 'json']
DEFAULT_CONNECTION_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/connection', '/etc/venus/common/ansible/connection']
DEFAULT_FILTER_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/filter', '/etc/venus/common/ansible/filter']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/venus.yml', '/etc/ansible/inventory/hosts.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/inventory', '/etc/venus/common/ansible/inventory']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/lookup', '/etc/venus/common/ansible/lookup']
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = venus
DEFAULT_VARS_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/plugins/vars', '/etc/venus/common/ansible/vars']
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['venus', 'yaml', 'advanced_host_list', 'host_list']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
* Control host: CentOS 7.6
* target: Microsoft Windows Server 2008 R2 Standard Service Pack 1

##### STEPS TO REPRODUCE
Create a file such as `C:\Temp\testfile_with_$dollar.txt` (the first task below does this), then fetch it:

```yaml
- name: Get Logs
  hosts: all
  vars:
    log_files:
      - "C:/Temp/testfile_with_$dollar.txt"
  tasks:
    - name: touch test files
      win_file:
        path: "{{ item }}"
        state: touch
      with_items: "{{ log_files }}"

    - name: fetch log files
      fetch:
        src: "{{ item }}"
        dest: "{{ tmp_dir }}/{{ item | basename }}"
        flat: yes
      with_items: "{{ log_files }}"
```

##### EXPECTED RESULTS
File gets copied from Windows remote to localhost.

##### ACTUAL RESULTS
```paste below
ansible-playbook 2.7.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/etc/ansible/plugins/modules', '/etc/venus/common/ansible/modules']
  ansible python module location = /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.1 (default, Oct 24 2017, 05:44:23) [GCC 5.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/inventory/venus.yml inventory source with venus plugin
/etc/ansible/inventory/hosts.yml did not meet venus requirements, check plugin documentation if this is unexpected
Set default localhost to localhost
Parsed /etc/ansible/inventory/hosts.yml inventory source with yaml plugin
Loading callback plugin debug of type stdout, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/debug.py
Loading callback plugin log_plays of type notification, v2.0 from /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/callback/log_plays.py

PLAYBOOK: test.yml ***************************************************************************************************************************************************************************
1 plays in test.yml

PLAY [testclient] ****************************************************************************************************************************************************************************
META: ran handlers

TASK [touch test files] **********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:7
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_file.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
changed: [testclient] => (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": true,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/packages/co2mo/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [testclient]: FAILED! => {}

MSG:

failed to transfer file to "/home/testuser/ansible//testfile_with_$dollar.txt"

PLAY RECAP ***********************************************************************************************************************************************************************************
testclient : ok=1 changed=1 unreachable=0 failed=1
```

When replacing `src: "{{ item }}"` with `src: "{{ item|quote }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
<testclient> FETCH "C:\Temp\testfile_with_$dollar.txt" TO "/home/testuser/ansible/testfile_with_$dollar.txt"
Traceback (most recent call last):
  File "/usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 681, in fetch_file
    raise IOError(to_native(result.std_err))
OSError: #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">The variable '$dollar' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:2 char:32_x000D__x000A_</S><S S="Error">+ $path = "C:\Temp\testfile_with_$dollar.txt"_x000D__x000A_</S><S S="Error">+ ~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (dollar:String) [], RuntimeExc _x000D__x000A_</S><S S="Error"> eption_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S><S S="Error">The variable '$path' cannot be retrieved because it has not been set._x000D__x000A_</S><S S="Error">At line:3 char:21_x000D__x000A_</S><S S="Error">+ If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">+ ~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidOperation: (path:String) [], RuntimeExcep _x000D__x000A_</S><S S="Error"> tion_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : VariableIsUndefined_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [testclient]: FAILED! => {}
```

When replacing `src: "{{ item }}"` with `src: "{{ item|replace('$', '\\$') }}"`:

```
TASK [fetch log files] ***********************************************************************************************************************************************************************
task path: /home/testuser/ansible/test.yml:12
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/win_stat.ps1
<testclient> ESTABLISH WINRM CONNECTION FOR USER: testuser@DOMAIN on PORT 5985 TO testclient
checking if winrm_host testclient is an IPv6 address
EXEC (via pipeline wrapper)
Using module file /usr/local/ansible/2.7/lib/python3.6/site-packages/ansible/modules/windows/slurp.ps1
EXEC (via pipeline wrapper)
failed: [testclient] (item=C:/Temp/testfile_with_$dollar.txt) => {
    "changed": false,
    "item": "C:/Temp/testfile_with_$dollar.txt"
}

MSG:

Path C:\Temp\testfile_with_\$dollar.txt is not found
```

##### WORKAROUND
```yaml
- name: create temporary copy of log files
  win_copy:
    remote_src: true
    src: "{{ item }}"
    dest: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    force: no
  with_items: "{{ log_files }}"

- name: fetch log files
  fetch:
    src: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    dest: "{{ tmp_dir }}/{{ item | basename }}"
    flat: yes
  with_items: "{{ log_files }}"

- name: remove temporary copy of log files
  win_file:
    path: "{{ item | replace('$', 'DOLLAR') }}.transfer"
    state: absent
  with_items: "{{ log_files }}"
```
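The workaround sidesteps quoting entirely by renaming. A controller-side mirror of that mapping (hypothetical `transfer_name` helper, not part of the playbook) makes the trade-off explicit: the substitution is lossy, so distinct sources such as `a_$b.txt` and `a_DOLLARb.txt` would collide on the same temporary name.

```python
def transfer_name(path):
    # Same substitution the win_copy/fetch/win_file tasks above perform:
    # derive a '$'-free temporary name that fetch can transfer safely.
    return path.replace("$", "DOLLAR") + ".transfer"

log_files = ["C:/Temp/testfile_with_$dollar.txt"]
print([transfer_name(p) for p in log_files])
# ['C:/Temp/testfile_with_DOLLARdollar.txt.transfer']
```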
https://github.com/ansible/ansible/issues/62781
https://github.com/ansible/ansible/pull/71411
8897d7e2ff8fa37c25cd4ba039984fd3a9e13b33
72a7cb4a2c3036da5e3abb32c50713a262d0c063
2019-09-24T11:15:44Z
python
2020-08-25T21:06:51Z
test/sanity/ignore.txt
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.6!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.7!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-3.5!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-3.5!skip # release process and docs build only, 3.6+ required
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/galaxy/collection.py compile-2.6!skip # 'ansible-galaxy collection' requires 2.7+
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/expect.py validate-modules:doc-missing-type
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/unarchive.py validate-modules:parameter-list-no-elements
lib/ansible/modules/get_url.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/uri.py validate-modules:parameter-list-no-elements
lib/ansible/modules/uri.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:doc-elements-mismatch
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/apt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt.py validate-modules:undocumented-parameter
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_key.py validate-modules:undocumented-parameter
lib/ansible/modules/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-missing-type
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/dnf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/dnf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/dpkg_selections.py validate-modules:doc-missing-type
lib/ansible/modules/dpkg_selections.py validate-modules:doc-required-mismatch
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/package_facts.py validate-modules:doc-missing-type
lib/ansible/modules/package_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/rpm_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:doc-missing-type
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum.py validate-modules:parameter-list-no-elements
lib/ansible/modules/yum.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:doc-missing-type
lib/ansible/modules/yum_repository.py validate-modules:parameter-list-no-elements
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/git.py validate-modules:parameter-list-no-elements
lib/ansible/modules/git.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/subversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/subversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/subversion.py validate-modules:undocumented-parameter
lib/ansible/modules/getent.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/hostname.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/iptables.py validate-modules:parameter-list-no-elements
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/setup.py validate-modules:doc-missing-type
lib/ansible/modules/setup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/sysvinit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/wait_for.py validate-modules:parameter-list-no-elements
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate # testing Python 2.x implicit relative imports
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/ansible_tower.py future-import-boilerplate
test/support/integration/plugins/module_utils/ansible_tower.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/k8s/common.py metaclass-boilerplate
test/support/integration/plugins/module_utils/k8s/raw.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/net_tools/nios/api.py future-import-boilerplate
test/support/integration/plugins/module_utils/net_tools/nios/api.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/synchronize.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py future-import-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py metaclass-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
closed
ansible/ansible
https://github.com/ansible/ansible
71,443
ansible-test coverage leads to 'Namespace' object has no attribute 'remote_endpoint'
##### SUMMARY We are running `ansible-test coverage` from the devel branch of Ansible on the `community.kubernetes` collection. It seems like sometime between yesterday morning and today, something in Ansible changed that is causing the error: ``` 'Namespace' object has no attribute 'remote_endpoint' ``` See: https://github.com/ansible-collections/community.kubernetes/runs/1025104583?check_suite_focus=true#step:6:17 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-test ##### ANSIBLE VERSION ```paste below devel (installed via pip) ``` ##### CONFIGURATION ```paste below N/A ``` ##### OS / ENVIRONMENT Ubuntu 20.04 (GitHub Actions CI) ##### STEPS TO REPRODUCE ``` # Inside the collection.kubernetes directory: pip install https://github.com/ansible/ansible/archive/devel.tar.gz ansible-test integration --docker -v --color --retry-on-error --python 3.6 --continue-on-error --diff --coverage ansible-test coverage xml -v --requirements --group-by command --group-by version ``` ##### EXPECTED RESULTS Coverage report is generated. ##### ACTUAL RESULTS ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version ansible-test coverage xml -v --requirements --group-by command --group-by version shell: /bin/bash -e {0} env: pythonLocation: /opt/hostedtoolcache/Python/3.6.11/x64 Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 159, in main config = args.config(args) # type: CommonConfig File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 59, in __init__ super(CoverageConfig, self).__init__(args, 'coverage') File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/config.py", line 99, in __init__ self.remote_endpoint = args.remote_endpoint # type: t.Optional[str] AttributeError: 'Namespace' object has no attribute 'remote_endpoint' ```
https://github.com/ansible/ansible/issues/71443
https://github.com/ansible/ansible/pull/71446
72a7cb4a2c3036da5e3abb32c50713a262d0c063
f5b6df14ab2e691f5f059fff0fcf59449132549f
2020-08-25T15:00:33Z
python
2020-08-26T04:23:44Z
changelogs/fragments/ansible-test-coverage-py26.yml
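The traceback in the report above is a classic argparse pitfall: an attribute exists on the parsed `Namespace` only if the invoked subcommand registered the corresponding option (or a `set_defaults()` entry supplied it). A minimal, self-contained sketch of the failure mode; the option name mirrors the traceback, while the parser itself is hypothetical:

```python
import argparse

parser = argparse.ArgumentParser(prog='ansible-test')
subparsers = parser.add_subparsers(dest='command')

# 'integration' registers the remote provisioning options...
integration = subparsers.add_parser('integration')
integration.add_argument('--remote-endpoint', default=None)

# ...but 'coverage' does not, and nothing defaults the attribute for it.
subparsers.add_parser('coverage')

args = parser.parse_args(['coverage'])
# Raises: AttributeError: 'Namespace' object has no attribute 'remote_endpoint'
print(args.remote_endpoint)
```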
closed
ansible/ansible
https://github.com/ansible/ansible
71,443
ansible-test coverage leads to 'Namespace' object has no attribute 'remote_endpoint'
##### SUMMARY We are running `ansible-test coverage` from the devel branch of Ansible on the `community.kubernetes` collection. It seems like sometime between yesterday morning and today, something in Ansible changed that is causing the error: ``` 'Namespace' object has no attribute 'remote_endpoint' ``` See: https://github.com/ansible-collections/community.kubernetes/runs/1025104583?check_suite_focus=true#step:6:17 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-test ##### ANSIBLE VERSION ```paste below devel (installed via pip) ``` ##### CONFIGURATION ```paste below N/A ``` ##### OS / ENVIRONMENT Ubuntu 20.04 (GitHub Actions CI) ##### STEPS TO REPRODUCE ``` # Inside the collection.kubernetes directory: pip install https://github.com/ansible/ansible/archive/devel.tar.gz ansible-test integration --docker -v --color --retry-on-error --python 3.6 --continue-on-error --diff --coverage ansible-test coverage xml -v --requirements --group-by command --group-by version ``` ##### EXPECTED RESULTS Coverage report is generated. ##### ACTUAL RESULTS ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version ansible-test coverage xml -v --requirements --group-by command --group-by version shell: /bin/bash -e {0} env: pythonLocation: /opt/hostedtoolcache/Python/3.6.11/x64 Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 159, in main config = args.config(args) # type: CommonConfig File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 59, in __init__ super(CoverageConfig, self).__init__(args, 'coverage') File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/config.py", line 99, in __init__ self.remote_endpoint = args.remote_endpoint # type: t.Optional[str] AttributeError: 'Namespace' object has no attribute 'remote_endpoint' ```
https://github.com/ansible/ansible/issues/71443
https://github.com/ansible/ansible/pull/71446
72a7cb4a2c3036da5e3abb32c50713a262d0c063
f5b6df14ab2e691f5f059fff0fcf59449132549f
2020-08-25T15:00:33Z
python
2020-08-26T04:23:44Z
test/integration/targets/ansible-test/collection-tests/coverage.sh
closed
ansible/ansible
https://github.com/ansible/ansible
71,443
ansible-test coverage leads to 'Namespace' object has no attribute 'remote_endpoint'
##### SUMMARY We are running `ansible-test coverage` from the devel branch of Ansible on the `community.kubernetes` collection. It seems like sometime between yesterday morning and today, something in Ansible changed that is causing the error: ``` 'Namespace' object has no attribute 'remote_endpoint' ``` See: https://github.com/ansible-collections/community.kubernetes/runs/1025104583?check_suite_focus=true#step:6:17 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-test ##### ANSIBLE VERSION ```paste below devel (installed via pip) ``` ##### CONFIGURATION ```paste below N/A ``` ##### OS / ENVIRONMENT Ubuntu 20.04 (GitHub Actions CI) ##### STEPS TO REPRODUCE ``` # Inside the collection.kubernetes directory: pip install https://github.com/ansible/ansible/archive/devel.tar.gz ansible-test integration --docker -v --color --retry-on-error --python 3.6 --continue-on-error --diff --coverage ansible-test coverage xml -v --requirements --group-by command --group-by version ``` ##### EXPECTED RESULTS Coverage report is generated. ##### ACTUAL RESULTS ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version ansible-test coverage xml -v --requirements --group-by command --group-by version shell: /bin/bash -e {0} env: pythonLocation: /opt/hostedtoolcache/Python/3.6.11/x64 Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 159, in main config = args.config(args) # type: CommonConfig File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 59, in __init__ super(CoverageConfig, self).__init__(args, 'coverage') File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/config.py", line 99, in __init__ self.remote_endpoint = args.remote_endpoint # type: t.Optional[str] AttributeError: 'Namespace' object has no attribute 'remote_endpoint' ```
https://github.com/ansible/ansible/issues/71443
https://github.com/ansible/ansible/pull/71446
72a7cb4a2c3036da5e3abb32c50713a262d0c063
f5b6df14ab2e691f5f059fff0fcf59449132549f
2020-08-25T15:00:33Z
python
2020-08-26T04:23:44Z
test/lib/ansible_test/_internal/cli.py
"""Test runner for all Ansible tests.""" from __future__ import (absolute_import, division, print_function) __metaclass__ = type import errno import os import sys # This import should occur as early as possible. # It must occur before subprocess has been imported anywhere in the current process. from .init import ( CURRENT_RLIMIT_NOFILE, ) from . import types as t from .util import ( ApplicationError, display, raw_command, generate_pip_command, read_lines_without_comments, MAXFD, ANSIBLE_TEST_DATA_ROOT, ) from .delegation import ( check_delegation_args, delegate, ) from .executor import ( command_posix_integration, command_network_integration, command_windows_integration, command_shell, SUPPORTED_PYTHON_VERSIONS, ApplicationWarning, Delegate, generate_pip_install, check_startup, ) from .config import ( PosixIntegrationConfig, WindowsIntegrationConfig, NetworkIntegrationConfig, SanityConfig, UnitsConfig, ShellConfig, ) from .env import ( EnvConfig, command_env, configure_timeout, ) from .sanity import ( command_sanity, sanity_init, sanity_get_tests, ) from .units import ( command_units, ) from .target import ( find_target_completion, walk_posix_integration_targets, walk_network_integration_targets, walk_windows_integration_targets, walk_units_targets, walk_sanity_targets, ) from .core_ci import ( AWS_ENDPOINTS, ) from .cloud import ( initialize_cloud_plugins, ) from .data import ( data_context, ) from .util_common import ( get_docker_completion, get_network_completion, get_remote_completion, CommonConfig, ) from .coverage.combine import ( command_coverage_combine, ) from .coverage.erase import ( command_coverage_erase, ) from .coverage.html import ( command_coverage_html, ) from .coverage.report import ( command_coverage_report, CoverageReportConfig, ) from .coverage.xml import ( command_coverage_xml, ) from .coverage.analyze.targets.generate import ( command_coverage_analyze_targets_generate, CoverageAnalyzeTargetsGenerateConfig, ) from .coverage.analyze.targets.expand import ( command_coverage_analyze_targets_expand, CoverageAnalyzeTargetsExpandConfig, ) from .coverage.analyze.targets.filter import ( command_coverage_analyze_targets_filter, CoverageAnalyzeTargetsFilterConfig, ) from .coverage.analyze.targets.combine import ( command_coverage_analyze_targets_combine, CoverageAnalyzeTargetsCombineConfig, ) from .coverage.analyze.targets.missing import ( command_coverage_analyze_targets_missing, CoverageAnalyzeTargetsMissingConfig, ) from .coverage import ( COVERAGE_GROUPS, CoverageConfig, ) if t.TYPE_CHECKING: import argparse as argparse_module def main(): """Main program function.""" try: os.chdir(data_context().content.root) initialize_cloud_plugins() sanity_init() args = parse_args() config = args.config(args) # type: CommonConfig display.verbosity = config.verbosity display.truncate = config.truncate display.redact = config.redact display.color = config.color display.info_stderr = config.info_stderr check_startup() check_delegation_args(config) configure_timeout(config) display.info('RLIMIT_NOFILE: %s' % (CURRENT_RLIMIT_NOFILE,), verbosity=2) display.info('MAXFD: %d' % MAXFD, verbosity=2) try: args.func(config) delegate_args = None except Delegate as ex: # save delegation args for use once we exit the exception handler delegate_args = (ex.exclude, ex.require, ex.integration_targets) if delegate_args: # noinspection PyTypeChecker delegate(config, *delegate_args) display.review_warnings() except ApplicationWarning as ex: display.warning(u'%s' % ex) sys.exit(0) except ApplicationError as 
ex: display.error(u'%s' % ex) sys.exit(1) except KeyboardInterrupt: sys.exit(2) except IOError as ex: if ex.errno == errno.EPIPE: sys.exit(3) raise def parse_args(): """Parse command line arguments.""" try: import argparse except ImportError: if '--requirements' not in sys.argv: raise # install argparse without using constraints since pip may be too old to support them # not using the ansible-test requirements file since this install is for sys.executable rather than the delegated python (which may be different) # argparse has no special requirements, so upgrading pip is not required here raw_command(generate_pip_install(generate_pip_command(sys.executable), '', packages=['argparse'], use_constraints=False)) import argparse try: import argcomplete except ImportError: argcomplete = None if argcomplete: epilog = 'Tab completion available using the "argcomplete" python package.' else: epilog = 'Install the "argcomplete" python package to enable tab completion.' def key_value_type(value): # type: (str) -> t.Tuple[str, str] """Wrapper around key_value.""" return key_value(argparse, value) parser = argparse.ArgumentParser(epilog=epilog) common = argparse.ArgumentParser(add_help=False) common.add_argument('-e', '--explain', action='store_true', help='explain commands that would be executed') common.add_argument('-v', '--verbose', dest='verbosity', action='count', default=0, help='display more output') common.add_argument('--color', metavar='COLOR', nargs='?', help='generate color output: %(choices)s', choices=('yes', 'no', 'auto'), const='yes', default='auto') common.add_argument('--debug', action='store_true', help='run ansible commands in debug mode') # noinspection PyTypeChecker common.add_argument('--truncate', dest='truncate', metavar='COLUMNS', type=int, default=display.columns, help='truncate some long output (0=disabled) (default: auto)') common.add_argument('--redact', dest='redact', action='store_true', default=True, help='redact sensitive values in output') common.add_argument('--no-redact', dest='redact', action='store_false', default=False, help='show sensitive values in output') common.add_argument('--check-python', choices=SUPPORTED_PYTHON_VERSIONS, help=argparse.SUPPRESS) test = argparse.ArgumentParser(add_help=False, parents=[common]) test.add_argument('include', metavar='TARGET', nargs='*', help='test the specified target').completer = complete_target test.add_argument('--include', metavar='TARGET', action='append', help='include the specified target').completer = complete_target test.add_argument('--exclude', metavar='TARGET', action='append', help='exclude the specified target').completer = complete_target test.add_argument('--require', metavar='TARGET', action='append', help='require the specified target').completer = complete_target test.add_argument('--coverage', action='store_true', help='analyze code coverage when running tests') test.add_argument('--coverage-label', default='', help='label to include in coverage output file names') test.add_argument('--coverage-check', action='store_true', help='only verify code coverage can be enabled') test.add_argument('--metadata', help=argparse.SUPPRESS) test.add_argument('--base-branch', help='base branch used for change detection') add_changes(test, argparse) add_environments(test) integration = argparse.ArgumentParser(add_help=False, parents=[test]) integration.add_argument('--python', metavar='VERSION', choices=SUPPORTED_PYTHON_VERSIONS + ('default',), help='python version: %s' % ', '.join(SUPPORTED_PYTHON_VERSIONS)) 
integration.add_argument('--start-at', metavar='TARGET', help='start at the specified target').completer = complete_target integration.add_argument('--start-at-task', metavar='TASK', help='start at the specified task') integration.add_argument('--tags', metavar='TAGS', help='only run plays and tasks tagged with these values') integration.add_argument('--skip-tags', metavar='TAGS', help='only run plays and tasks whose tags do not match these values') integration.add_argument('--diff', action='store_true', help='show diff output') integration.add_argument('--allow-destructive', action='store_true', help='allow destructive tests') integration.add_argument('--allow-root', action='store_true', help='allow tests requiring root when not root') integration.add_argument('--allow-disabled', action='store_true', help='allow tests which have been marked as disabled') integration.add_argument('--allow-unstable', action='store_true', help='allow tests which have been marked as unstable') integration.add_argument('--allow-unstable-changed', action='store_true', help='allow tests which have been marked as unstable when focused changes are detected') integration.add_argument('--allow-unsupported', action='store_true', help='allow tests which have been marked as unsupported') integration.add_argument('--retry-on-error', action='store_true', help='retry failed test with increased verbosity') integration.add_argument('--continue-on-error', action='store_true', help='continue after failed test') integration.add_argument('--debug-strategy', action='store_true', help='run test playbooks using the debug strategy') integration.add_argument('--changed-all-target', metavar='TARGET', default='all', help='target to run when all tests are needed') integration.add_argument('--changed-all-mode', metavar='MODE', choices=('default', 'include', 'exclude'), help='include/exclude behavior with --changed-all-target: %(choices)s') integration.add_argument('--list-targets', action='store_true', help='list matching targets instead of running tests') integration.add_argument('--no-temp-workdir', action='store_true', help='do not run tests from a temporary directory (use only for verifying broken tests)') integration.add_argument('--no-temp-unicode', action='store_true', help='avoid unicode characters in temporary directory (use only for verifying broken tests)') subparsers = parser.add_subparsers(metavar='COMMAND') subparsers.required = True # work-around for python 3 bug which makes subparsers optional posix_integration = subparsers.add_parser('integration', parents=[integration], help='posix integration tests') posix_integration.set_defaults(func=command_posix_integration, targets=walk_posix_integration_targets, config=PosixIntegrationConfig) add_extra_docker_options(posix_integration) add_httptester_options(posix_integration, argparse) network_integration = subparsers.add_parser('network-integration', parents=[integration], help='network integration tests') network_integration.set_defaults(func=command_network_integration, targets=walk_network_integration_targets, config=NetworkIntegrationConfig) add_extra_docker_options(network_integration, integration=False) network_integration.add_argument('--platform', metavar='PLATFORM', action='append', help='network platform/version').completer = complete_network_platform network_integration.add_argument('--platform-collection', type=key_value_type, metavar='PLATFORM=COLLECTION', action='append', help='collection used to test platform').completer = complete_network_platform_collection 
network_integration.add_argument('--platform-connection', type=key_value_type, metavar='PLATFORM=CONNECTION', action='append', help='connection used to test platform').completer = complete_network_platform_connection network_integration.add_argument('--inventory', metavar='PATH', help='path to inventory used for tests') network_integration.add_argument('--testcase', metavar='TESTCASE', help='limit a test to a specified testcase').completer = complete_network_testcase windows_integration = subparsers.add_parser('windows-integration', parents=[integration], help='windows integration tests') windows_integration.set_defaults(func=command_windows_integration, targets=walk_windows_integration_targets, config=WindowsIntegrationConfig) add_extra_docker_options(windows_integration, integration=False) add_httptester_options(windows_integration, argparse) windows_integration.add_argument('--windows', metavar='VERSION', action='append', help='windows version').completer = complete_windows windows_integration.add_argument('--inventory', metavar='PATH', help='path to inventory used for tests') units = subparsers.add_parser('units', parents=[test], help='unit tests') units.set_defaults(func=command_units, targets=walk_units_targets, config=UnitsConfig) units.add_argument('--python', metavar='VERSION', choices=SUPPORTED_PYTHON_VERSIONS + ('default',), help='python version: %s' % ', '.join(SUPPORTED_PYTHON_VERSIONS)) units.add_argument('--collect-only', action='store_true', help='collect tests but do not execute them') # noinspection PyTypeChecker units.add_argument('--num-workers', type=int, help='number of workers to use (default: auto)') units.add_argument('--requirements-mode', choices=('only', 'skip'), help=argparse.SUPPRESS) add_extra_docker_options(units, integration=False) sanity = subparsers.add_parser('sanity', parents=[test], help='sanity tests') sanity.set_defaults(func=command_sanity, targets=walk_sanity_targets, config=SanityConfig) sanity.add_argument('--test', metavar='TEST', action='append', choices=[test.name for test in sanity_get_tests()], help='tests to run').completer = complete_sanity_test sanity.add_argument('--skip-test', metavar='TEST', action='append', choices=[test.name for test in sanity_get_tests()], help='tests to skip').completer = complete_sanity_test sanity.add_argument('--allow-disabled', action='store_true', help='allow tests to run which are disabled by default') sanity.add_argument('--list-tests', action='store_true', help='list available tests') sanity.add_argument('--python', metavar='VERSION', choices=SUPPORTED_PYTHON_VERSIONS + ('default',), help='python version: %s' % ', '.join(SUPPORTED_PYTHON_VERSIONS)) sanity.add_argument('--enable-optional-errors', action='store_true', help='enable optional errors') add_lint(sanity) add_extra_docker_options(sanity, integration=False) shell = subparsers.add_parser('shell', parents=[common], help='open an interactive shell') shell.add_argument('--python', metavar='VERSION', choices=SUPPORTED_PYTHON_VERSIONS + ('default',), help='python version: %s' % ', '.join(SUPPORTED_PYTHON_VERSIONS)) shell.set_defaults(func=command_shell, config=ShellConfig) shell.add_argument('--raw', action='store_true', help='direct to shell with no setup') add_environments(shell) add_extra_docker_options(shell) add_httptester_options(shell, argparse) coverage_common = argparse.ArgumentParser(add_help=False, parents=[common]) add_environments(coverage_common, isolated_delegation=False) coverage = subparsers.add_parser('coverage', help='code coverage 
management and reporting') coverage_subparsers = coverage.add_subparsers(metavar='COMMAND') coverage_subparsers.required = True # work-around for python 3 bug which makes subparsers optional add_coverage_analyze(coverage_subparsers, coverage_common) coverage_combine = coverage_subparsers.add_parser('combine', parents=[coverage_common], help='combine coverage data and rewrite remote paths') coverage_combine.set_defaults(func=command_coverage_combine, config=CoverageConfig) add_extra_coverage_options(coverage_combine) coverage_erase = coverage_subparsers.add_parser('erase', parents=[coverage_common], help='erase coverage data files') coverage_erase.set_defaults(func=command_coverage_erase, config=CoverageConfig) coverage_report = coverage_subparsers.add_parser('report', parents=[coverage_common], help='generate console coverage report') coverage_report.set_defaults(func=command_coverage_report, config=CoverageReportConfig) coverage_report.add_argument('--show-missing', action='store_true', help='show line numbers of statements not executed') coverage_report.add_argument('--include', metavar='PAT1,PAT2,...', help='include only files whose paths match one of these ' 'patterns. Accepts shell-style wildcards, which must be ' 'quoted.') coverage_report.add_argument('--omit', metavar='PAT1,PAT2,...', help='omit files whose paths match one of these patterns. ' 'Accepts shell-style wildcards, which must be quoted.') add_extra_coverage_options(coverage_report) coverage_html = coverage_subparsers.add_parser('html', parents=[coverage_common], help='generate html coverage report') coverage_html.set_defaults(func=command_coverage_html, config=CoverageConfig) add_extra_coverage_options(coverage_html) coverage_xml = coverage_subparsers.add_parser('xml', parents=[coverage_common], help='generate xml coverage report') coverage_xml.set_defaults(func=command_coverage_xml, config=CoverageConfig) add_extra_coverage_options(coverage_xml) env = subparsers.add_parser('env', parents=[common], help='show information about the test environment') env.set_defaults(func=command_env, config=EnvConfig) env.add_argument('--show', action='store_true', help='show environment on stdout') env.add_argument('--dump', action='store_true', help='dump environment to disk') env.add_argument('--list-files', action='store_true', help='list files on stdout') # noinspection PyTypeChecker env.add_argument('--timeout', type=int, metavar='MINUTES', help='timeout for future ansible-test commands (0 clears)') if argcomplete: argcomplete.autocomplete(parser, always_complete_options=False, validator=lambda i, k: True) args = parser.parse_args() if args.explain and not args.verbosity: args.verbosity = 1 if args.color == 'yes': args.color = True elif args.color == 'no': args.color = False else: args.color = sys.stdout.isatty() return args def key_value(argparse, value): # type: (argparse_module, str) -> t.Tuple[str, str] """Type parsing and validation for argparse key/value pairs separated by an '=' character.""" parts = value.split('=') if len(parts) != 2: raise argparse.ArgumentTypeError('"%s" must be in the format "key=value"' % value) return parts[0], parts[1] # noinspection PyProtectedMember def add_coverage_analyze(coverage_subparsers, coverage_common): # type: (argparse_module._SubParsersAction, argparse_module.ArgumentParser) -> None """Add the `coverage analyze` subcommand.""" analyze = coverage_subparsers.add_parser( 'analyze', help='analyze collected coverage data', ) analyze_subparsers = analyze.add_subparsers(metavar='COMMAND') 
analyze_subparsers.required = True # work-around for python 3 bug which makes subparsers optional targets = analyze_subparsers.add_parser( 'targets', help='analyze integration test target coverage', ) targets_subparsers = targets.add_subparsers(metavar='COMMAND') targets_subparsers.required = True # work-around for python 3 bug which makes subparsers optional targets_generate = targets_subparsers.add_parser( 'generate', parents=[coverage_common], help='aggregate coverage by integration test target', ) targets_generate.set_defaults( func=command_coverage_analyze_targets_generate, config=CoverageAnalyzeTargetsGenerateConfig, ) targets_generate.add_argument( 'input_dir', nargs='?', help='directory to read coverage from', ) targets_generate.add_argument( 'output_file', help='output file for aggregated coverage', ) targets_expand = targets_subparsers.add_parser( 'expand', parents=[coverage_common], help='expand target names from integers in aggregated coverage', ) targets_expand.set_defaults( func=command_coverage_analyze_targets_expand, config=CoverageAnalyzeTargetsExpandConfig, ) targets_expand.add_argument( 'input_file', help='input file to read aggregated coverage from', ) targets_expand.add_argument( 'output_file', help='output file to write expanded coverage to', ) targets_filter = targets_subparsers.add_parser( 'filter', parents=[coverage_common], help='filter aggregated coverage data', ) targets_filter.set_defaults( func=command_coverage_analyze_targets_filter, config=CoverageAnalyzeTargetsFilterConfig, ) targets_filter.add_argument( 'input_file', help='input file to read aggregated coverage from', ) targets_filter.add_argument( 'output_file', help='output file to write expanded coverage to', ) targets_filter.add_argument( '--include-target', dest='include_targets', action='append', help='include the specified targets', ) targets_filter.add_argument( '--exclude-target', dest='exclude_targets', action='append', help='exclude the specified targets', ) targets_filter.add_argument( '--include-path', help='include paths matching the given regex', ) targets_filter.add_argument( '--exclude-path', help='exclude paths matching the given regex', ) targets_combine = targets_subparsers.add_parser( 'combine', parents=[coverage_common], help='combine multiple aggregated coverage files', ) targets_combine.set_defaults( func=command_coverage_analyze_targets_combine, config=CoverageAnalyzeTargetsCombineConfig, ) targets_combine.add_argument( 'input_file', nargs='+', help='input file to read aggregated coverage from', ) targets_combine.add_argument( 'output_file', help='output file to write aggregated coverage to', ) targets_missing = targets_subparsers.add_parser( 'missing', parents=[coverage_common], help='identify coverage in one file missing in another', ) targets_missing.set_defaults( func=command_coverage_analyze_targets_missing, config=CoverageAnalyzeTargetsMissingConfig, ) targets_missing.add_argument( 'from_file', help='input file containing aggregated coverage', ) targets_missing.add_argument( 'to_file', help='input file containing aggregated coverage', ) targets_missing.add_argument( 'output_file', help='output file to write aggregated coverage to', ) targets_missing.add_argument( '--only-gaps', action='store_true', help='report only arcs/lines not hit by any target', ) targets_missing.add_argument( '--only-exists', action='store_true', help='limit results to files that exist', ) def add_lint(parser): """ :type parser: argparse.ArgumentParser """ parser.add_argument('--lint', 
action='store_true', help='write lint output to stdout, everything else stderr') parser.add_argument('--junit', action='store_true', help='write test failures to junit xml files') parser.add_argument('--failure-ok', action='store_true', help='exit successfully on failed tests after saving results') def add_changes(parser, argparse): """ :type parser: argparse.ArgumentParser :type argparse: argparse """ parser.add_argument('--changed', action='store_true', help='limit targets based on changes') changes = parser.add_argument_group(title='change detection arguments') changes.add_argument('--tracked', action='store_true', help=argparse.SUPPRESS) changes.add_argument('--untracked', action='store_true', help='include untracked files') changes.add_argument('--ignore-committed', dest='committed', action='store_false', help='exclude committed files') changes.add_argument('--ignore-staged', dest='staged', action='store_false', help='exclude staged files') changes.add_argument('--ignore-unstaged', dest='unstaged', action='store_false', help='exclude unstaged files') changes.add_argument('--changed-from', metavar='PATH', help=argparse.SUPPRESS) changes.add_argument('--changed-path', metavar='PATH', action='append', help=argparse.SUPPRESS) def add_environments(parser, isolated_delegation=True): """ :type parser: argparse.ArgumentParser :type isolated_delegation: bool """ parser.add_argument('--requirements', action='store_true', help='install command requirements') parser.add_argument('--python-interpreter', metavar='PATH', default=None, help='path to the docker or remote python interpreter') parser.add_argument('--no-pip-check', dest='pip_check', default=True, action='store_false', help='do not run "pip check" to verify requirements') environments = parser.add_mutually_exclusive_group() environments.add_argument('--local', action='store_true', help='run from the local environment') environments.add_argument('--venv', action='store_true', help='run from ansible-test managed virtual environments') venv = parser.add_argument_group(title='venv arguments') venv.add_argument('--venv-system-site-packages', action='store_true', help='enable system site packages') if not isolated_delegation: environments.set_defaults( docker=None, remote=None, remote_stage=None, remote_provider=None, remote_aws_region=None, remote_terminate=None, python_interpreter=None, ) return environments.add_argument('--docker', metavar='IMAGE', nargs='?', default=None, const='default', help='run from a docker container').completer = complete_docker environments.add_argument('--remote', metavar='PLATFORM', default=None, help='run from a remote instance').completer = complete_remote_shell if parser.prog.endswith(' shell') else complete_remote remote = parser.add_argument_group(title='remote arguments') remote.add_argument('--remote-stage', metavar='STAGE', help='remote stage to use: prod, dev', default='prod').completer = complete_remote_stage remote.add_argument('--remote-provider', metavar='PROVIDER', help='remote provider to use: %(choices)s', choices=['default', 'aws', 'azure', 'parallels', 'ibmvpc', 'ibmps'], default='default') remote.add_argument('--remote-endpoint', metavar='ENDPOINT', help='remote provisioning endpoint to use (default: auto)', default=None) remote.add_argument('--remote-aws-region', metavar='REGION', help='remote aws region to use: %(choices)s (default: auto)', choices=sorted(AWS_ENDPOINTS), default=None) remote.add_argument('--remote-terminate', metavar='WHEN', help='terminate remote instance: %(choices)s 
(default: %(default)s)', choices=['never', 'always', 'success'], default='never') def add_extra_coverage_options(parser): """ :type parser: argparse.ArgumentParser """ parser.add_argument('--group-by', metavar='GROUP', action='append', choices=COVERAGE_GROUPS, help='group output by: %s' % ', '.join(COVERAGE_GROUPS)) parser.add_argument('--all', action='store_true', help='include all python/powershell source files') parser.add_argument('--stub', action='store_true', help='generate empty report of all python/powershell source files') def add_httptester_options(parser, argparse): """ :type parser: argparse.ArgumentParser :type argparse: argparse """ group = parser.add_mutually_exclusive_group() group.add_argument('--httptester', metavar='IMAGE', default='quay.io/ansible/http-test-container:1.0.0', help='docker image to use for the httptester container') group.add_argument('--disable-httptester', dest='httptester', action='store_const', const='', help='do not use the httptester container') parser.add_argument('--inject-httptester', action='store_true', help=argparse.SUPPRESS) # internal use only def add_extra_docker_options(parser, integration=True): """ :type parser: argparse.ArgumentParser :type integration: bool """ docker = parser.add_argument_group(title='docker arguments') docker.add_argument('--docker-no-pull', action='store_false', dest='docker_pull', help='do not explicitly pull the latest docker images') if data_context().content.is_ansible: docker.add_argument('--docker-keep-git', action='store_true', help='transfer git related files into the docker container') else: docker.set_defaults( docker_keep_git=False, ) docker.add_argument('--docker-seccomp', metavar='SC', choices=('default', 'unconfined'), default=None, help='set seccomp confinement for the test container: %(choices)s') docker.add_argument('--docker-terminate', metavar='WHEN', help='terminate docker container: %(choices)s (default: %(default)s)', choices=['never', 'always', 'success'], default='always') if not integration: return docker.add_argument('--docker-privileged', action='store_true', help='run docker container in privileged mode') # noinspection PyTypeChecker docker.add_argument('--docker-memory', help='memory limit for docker in bytes', type=int) # noinspection PyUnusedLocal def complete_remote_stage(prefix, parsed_args, **_): # pylint: disable=unused-argument """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ return [stage for stage in ('prod', 'dev') if stage.startswith(prefix)] def complete_target(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ return find_target_completion(parsed_args.targets, prefix) # noinspection PyUnusedLocal def complete_remote(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ del parsed_args images = sorted(get_remote_completion().keys()) return [i for i in images if i.startswith(prefix)] # noinspection PyUnusedLocal def complete_remote_shell(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ del parsed_args images = sorted(get_remote_completion().keys()) # 2008 doesn't support SSH so we do not add to the list of valid images windows_completion_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'completion', 'windows.txt') images.extend(["windows/%s" % i for i in read_lines_without_comments(windows_completion_path, remove_blank_lines=True) if i != '2008']) return [i for i in images if i.startswith(prefix)] # noinspection 
PyUnusedLocal def complete_docker(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ del parsed_args images = sorted(get_docker_completion().keys()) return [i for i in images if i.startswith(prefix)] def complete_windows(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ images = read_lines_without_comments(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'completion', 'windows.txt'), remove_blank_lines=True) return [i for i in images if i.startswith(prefix) and (not parsed_args.windows or i not in parsed_args.windows)] def complete_network_platform(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ images = sorted(get_network_completion()) return [i for i in images if i.startswith(prefix) and (not parsed_args.platform or i not in parsed_args.platform)] def complete_network_platform_collection(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ left = prefix.split('=')[0] images = sorted(set(image.split('/')[0] for image in get_network_completion())) return [i + '=' for i in images if i.startswith(left) and (not parsed_args.platform_collection or i not in [x[0] for x in parsed_args.platform_collection])] def complete_network_platform_connection(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ left = prefix.split('=')[0] images = sorted(set(image.split('/')[0] for image in get_network_completion())) return [i + '=' for i in images if i.startswith(left) and (not parsed_args.platform_connection or i not in [x[0] for x in parsed_args.platform_connection])] def complete_network_testcase(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ testcases = [] # since testcases are module specific, don't autocomplete if more than one # module is specified if len(parsed_args.include) != 1: return [] test_dir = os.path.join(data_context().content.integration_targets_path, parsed_args.include[0], 'tests') connection_dirs = data_context().content.get_dirs(test_dir) for connection_dir in connection_dirs: for testcase in [os.path.basename(path) for path in data_context().content.get_files(connection_dir)]: if testcase.startswith(prefix): testcases.append(testcase.split('.')[0]) return testcases # noinspection PyUnusedLocal def complete_sanity_test(prefix, parsed_args, **_): """ :type prefix: unicode :type parsed_args: any :rtype: list[str] """ del parsed_args tests = sorted(test.name for test in sanity_get_tests()) return [i for i in tests if i.startswith(prefix)]
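Note how `add_environments()` in the cli.py source above handles commands that do not support isolated delegation: when `isolated_delegation` is false it calls `environments.set_defaults()` with `docker`, `remote`, `remote_stage`, `remote_provider`, `remote_aws_region`, `remote_terminate` and `python_interpreter`, but not the newly added `remote_endpoint`, even though the traceback shows `EnvironmentConfig.__init__` reading `args.remote_endpoint` unconditionally. A sketch of the likely fix, assuming the linked PR simply extends that defaults list (excerpt abridged from the function above; the comment marks the assumed change):

```python
# excerpt of add_environments(), abridged
if not isolated_delegation:
    environments.set_defaults(
        docker=None,
        remote=None,
        remote_stage=None,
        remote_provider=None,
        remote_endpoint=None,  # likely fix: commands such as `coverage` never register --remote-endpoint
        remote_aws_region=None,
        remote_terminate=None,
        python_interpreter=None,
    )
    return
```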
closed
ansible/ansible
https://github.com/ansible/ansible
71,443
ansible-test coverage leads to 'Namespace' object has no attribute 'remote_endpoint'
##### SUMMARY We are running `ansible-test coverage` from the devel branch of Ansible on the `community.kubernetes` collection. It seems like sometime between yesterday morning and today, something in Ansible changed that is causing the error: ``` 'Namespace' object has no attribute 'remote_endpoint' ``` See: https://github.com/ansible-collections/community.kubernetes/runs/1025104583?check_suite_focus=true#step:6:17 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-test ##### ANSIBLE VERSION ```paste below devel (installed via pip) ``` ##### CONFIGURATION ```paste below N/A ``` ##### OS / ENVIRONMENT Ubuntu 20.04 (GitHub Actions CI) ##### STEPS TO REPRODUCE ``` # Inside the collection.kubernetes directory: pip install https://github.com/ansible/ansible/archive/devel.tar.gz ansible-test integration --docker -v --color --retry-on-error --python 3.6 --continue-on-error --diff --coverage ansible-test coverage xml -v --requirements --group-by command --group-by version ``` ##### EXPECTED RESULTS Coverage report is generated. ##### ACTUAL RESULTS ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version ansible-test coverage xml -v --requirements --group-by command --group-by version shell: /bin/bash -e {0} env: pythonLocation: /opt/hostedtoolcache/Python/3.6.11/x64 Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.11/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 159, in main config = args.config(args) # type: CommonConfig File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 59, in __init__ super(CoverageConfig, self).__init__(args, 'coverage') File "/opt/hostedtoolcache/Python/3.6.11/x64/lib/python3.6/site-packages/ansible_test/_internal/config.py", line 99, in __init__ self.remote_endpoint = args.remote_endpoint # type: t.Optional[str] AttributeError: 'Namespace' object has no attribute 'remote_endpoint' ```
https://github.com/ansible/ansible/issues/71443
https://github.com/ansible/ansible/pull/71446
72a7cb4a2c3036da5e3abb32c50713a262d0c063
f5b6df14ab2e691f5f059fff0fcf59449132549f
2020-08-25T15:00:33Z
python
2020-08-26T04:23:44Z
test/lib/ansible_test/_internal/coverage/__init__.py
"""Common logic for the coverage subcommand.""" from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import re from .. import types as t from ..encoding import ( to_bytes, ) from ..io import ( open_binary_file, read_json_file, ) from ..util import ( ApplicationError, common_environment, display, ANSIBLE_TEST_DATA_ROOT, ) from ..util_common import ( intercept_command, ResultType, ) from ..config import ( EnvironmentConfig, ) from ..executor import ( Delegate, install_command_requirements, ) from .. target import ( walk_module_targets, ) from ..data import ( data_context, ) if t.TYPE_CHECKING: import coverage as coverage_module COVERAGE_GROUPS = ('command', 'target', 'environment', 'version') COVERAGE_CONFIG_PATH = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'coveragerc') COVERAGE_OUTPUT_FILE_NAME = 'coverage' class CoverageConfig(EnvironmentConfig): """Configuration for the coverage command.""" def __init__(self, args): # type: (t.Any) -> None super(CoverageConfig, self).__init__(args, 'coverage') self.group_by = frozenset(args.group_by) if 'group_by' in args and args.group_by else set() # type: t.FrozenSet[str] self.all = args.all if 'all' in args else False # type: bool self.stub = args.stub if 'stub' in args else False # type: bool self.coverage = False # temporary work-around to support intercept_command in cover.py def initialize_coverage(args): # type: (CoverageConfig) -> coverage_module """Delegate execution if requested, install requirements, then import and return the coverage module. Raises an exception if coverage is not available.""" if args.delegate: raise Delegate() if args.requirements: install_command_requirements(args) try: import coverage except ImportError: coverage = None if not coverage: raise ApplicationError('You must install the "coverage" python module to use this command.') coverage_version_string = coverage.__version__ coverage_version = tuple(int(v) for v in coverage_version_string.split('.')) min_version = (4, 2) max_version = (5, 0) supported_version = True recommended_version = '4.5.4' if coverage_version < min_version or coverage_version >= max_version: supported_version = False if not supported_version: raise ApplicationError('Version %s of "coverage" is not supported. Version %s is known to work and is recommended.' % ( coverage_version_string, recommended_version)) return coverage def run_coverage(args, output_file, command, cmd): # type: (CoverageConfig, str, str, t.List[str]) -> None """Run the coverage cli tool with the specified options.""" env = common_environment() env.update(dict(COVERAGE_FILE=output_file)) cmd = ['python', '-m', 'coverage', command, '--rcfile', COVERAGE_CONFIG_PATH] + cmd intercept_command(args, target_name='coverage', env=env, cmd=cmd, disable_coverage=True) def get_python_coverage_files(path=None): # type: (t.Optional[str]) -> t.List[str] """Return the list of Python coverage file paths.""" return get_coverage_files('python', path) def get_powershell_coverage_files(path=None): # type: (t.Optional[str]) -> t.List[str] """Return the list of PowerShell coverage file paths.""" return get_coverage_files('powershell', path) def get_coverage_files(language, path=None): # type: (str, t.Optional[str]) -> t.List[str] """Return the list of coverage file paths for the given language.""" coverage_dir = path or ResultType.COVERAGE.path coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir) if '=coverage.' 
in f and '=%s' % language in f] return coverage_files def get_collection_path_regexes(): # type: () -> t.Tuple[t.Optional[t.Pattern], t.Optional[t.Pattern]] """Return a pair of regexes used for identifying and manipulating collection paths.""" if data_context().content.collection: collection_search_re = re.compile(r'/%s/' % data_context().content.collection.directory) collection_sub_re = re.compile(r'^.*?/%s/' % data_context().content.collection.directory) else: collection_search_re = None collection_sub_re = None return collection_search_re, collection_sub_re def get_python_modules(): # type: () -> t.Dict[str, str] """Return a dictionary of Ansible module names and their paths.""" return dict((target.module, target.path) for target in list(walk_module_targets()) if target.path.endswith('.py')) def enumerate_python_arcs( path, # type: str coverage, # type: coverage_module modules, # type: t.Dict[str, str] collection_search_re, # type: t.Optional[t.Pattern] collection_sub_re, # type: t.Optional[t.Pattern] ): # type: (...) -> t.Generator[t.Tuple[str, t.Set[t.Tuple[int, int]]]] """Enumerate Python code coverage arcs in the given file.""" if os.path.getsize(path) == 0: display.warning('Empty coverage file: %s' % path, verbosity=2) return original = coverage.CoverageData() try: original.read_file(path) except Exception as ex: # pylint: disable=locally-disabled, broad-except with open_binary_file(path) as file: header = file.read(6) if header == b'SQLite': display.error('File created by "coverage" 5.0+: %s' % os.path.relpath(path)) else: display.error(u'%s' % ex) return for filename in original.measured_files(): arcs = original.arcs(filename) if not arcs: # This is most likely due to using an unsupported version of coverage. display.warning('No arcs found for "%s" in coverage file: %s' % (filename, path)) continue filename = sanitize_filename(filename, modules=modules, collection_search_re=collection_search_re, collection_sub_re=collection_sub_re) if not filename: continue yield filename, set(arcs) def enumerate_powershell_lines( path, # type: str collection_search_re, # type: t.Optional[t.Pattern] collection_sub_re, # type: t.Optional[t.Pattern] ): # type: (...) -> t.Generator[t.Tuple[str, t.Dict[int, int]]] """Enumerate PowerShell code coverage lines in the given file.""" if os.path.getsize(path) == 0: display.warning('Empty coverage file: %s' % path, verbosity=2) return try: coverage_run = read_json_file(path) except Exception as ex: # pylint: disable=locally-disabled, broad-except display.error(u'%s' % ex) return for filename, hits in coverage_run.items(): filename = sanitize_filename(filename, collection_search_re=collection_search_re, collection_sub_re=collection_sub_re) if not filename: continue # PowerShell unpacks arrays if there's only a single entry so this is a defensive check on that if not isinstance(hits, list): hits = [hits] hits = dict((hit['Line'], hit['HitCount']) for hit in hits if hit) yield filename, hits def sanitize_filename( filename, # type: str modules=None, # type: t.Optional[t.Dict[str, str]] collection_search_re=None, # type: t.Optional[t.Pattern] collection_sub_re=None, # type: t.Optional[t.Pattern] ): # type: (...) 
-> t.Optional[str] """Convert the given code coverage path to a local absolute path and return it, or None if the path is not valid.""" ansible_path = os.path.abspath('lib/ansible/') + '/' root_path = data_context().content.root + '/' integration_temp_path = os.path.sep + os.path.join(ResultType.TMP.relative_path, 'integration') + os.path.sep if modules is None: modules = {} if '/ansible_modlib.zip/ansible/' in filename: # Rewrite the module_utils path from the remote host to match the controller. Ansible 2.6 and earlier. new_name = re.sub('^.*/ansible_modlib.zip/ansible/', ansible_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif collection_search_re and collection_search_re.search(filename): new_name = os.path.abspath(collection_sub_re.sub('', filename)) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search(r'/ansible_[^/]+_payload\.zip/ansible/', filename): # Rewrite the module_utils path from the remote host to match the controller. Ansible 2.7 and later. new_name = re.sub(r'^.*/ansible_[^/]+_payload\.zip/ansible/', ansible_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif '/ansible_module_' in filename: # Rewrite the module path from the remote host to match the controller. Ansible 2.6 and earlier. module_name = re.sub('^.*/ansible_module_(?P<module>.*).py$', '\\g<module>', filename) if module_name not in modules: display.warning('Skipping coverage of unknown module: %s' % module_name) return None new_name = os.path.abspath(modules[module_name]) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search(r'/ansible_[^/]+_payload(_[^/]+|\.zip)/__main__\.py$', filename): # Rewrite the module path from the remote host to match the controller. Ansible 2.7 and later. # AnsiballZ versions using zipimporter will match the `.zip` portion of the regex. # AnsiballZ versions not using zipimporter will match the `_[^/]+` portion of the regex. module_name = re.sub(r'^.*/ansible_(?P<module>[^/]+)_payload(_[^/]+|\.zip)/__main__\.py$', '\\g<module>', filename).rstrip('_') if module_name not in modules: display.warning('Skipping coverage of unknown module: %s' % module_name) return None new_name = os.path.abspath(modules[module_name]) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search('^(/.*?)?/root/ansible/', filename): # Rewrite the path of code running on a remote host or in a docker container as root. new_name = re.sub('^(/.*?)?/root/ansible/', root_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif integration_temp_path in filename: # Rewrite the path of code running from an integration test temporary directory.
new_name = re.sub(r'^.*' + re.escape(integration_temp_path) + '[^/]+/', root_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name return filename class PathChecker: """Checks code coverage paths to verify they are valid and reports on the findings.""" def __init__(self, args, collection_search_re=None): # type: (CoverageConfig, t.Optional[t.Pattern]) -> None self.args = args self.collection_search_re = collection_search_re self.invalid_paths = [] self.invalid_path_chars = 0 def check_path(self, path): # type: (str) -> bool """Return True if the given coverage path is valid, otherwise display a warning and return False.""" if os.path.isfile(to_bytes(path)): return True if self.collection_search_re and self.collection_search_re.search(path) and os.path.basename(path) == '__init__.py': # the collection loader uses implicit namespace packages, so __init__.py does not need to exist on disk # coverage is still reported for these non-existent files, but warnings are not needed return False self.invalid_paths.append(path) self.invalid_path_chars += len(path) if self.args.verbosity > 1: display.warning('Invalid coverage path: %s' % path) return False def report(self): # type: () -> None """Display a warning regarding invalid paths if any were found.""" if self.invalid_paths: display.warning('Ignored %d characters from %d invalid coverage path(s).' % (self.invalid_path_chars, len(self.invalid_paths)))
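`initialize_coverage()` in the coverage/__init__.py source above gates on a half-open version window for the `coverage` module (at least 4.2, below 5.0). A self-contained sketch of that comparison, decoupled from the ansible-test internals; the helper name is illustrative:

```python
def coverage_version_supported(version_string, min_version=(4, 2), max_version=(5, 0)):
    """Return True if the dotted version string falls within [min_version, max_version)."""
    version = tuple(int(part) for part in version_string.split('.'))
    return min_version <= version < max_version

assert coverage_version_supported('4.5.4')        # the recommended release
assert not coverage_version_supported('4.1.0')    # below the supported window
assert not coverage_version_supported('5.0.1')    # 5.0+ writes SQLite data files, rejected above
```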