Dataset schema:

- status: string (1 class)
- repo_name: string (31 values)
- repo_url: string (31 values)
- issue_id: int64 (1 to 104k)
- title: string (length 4 to 369)
- body: string (length 0 to 254k, nullable)
- issue_url: string (length 37 to 56)
- pull_url: string (length 37 to 54)
- before_fix_sha: string (length 40)
- after_fix_sha: string (length 40)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 classes)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (length 4 to 188)
- file_content: string (length 0 to 5.12M)
status: closed | repo_name: ansible/ansible | repo_url: https://github.com/ansible/ansible | issue_id: 69092
title: cache plugins vs collections (jsonfile)
body:
##### SUMMARY
From @DouglasHeriot in https://github.com/ansible/ansible/issues/69075#issuecomment-617563015
Another issue with the inventory docs: example of enabling caching uses `jsonfile`.
https://docs.ansible.com/ansible/devel/plugins/cache.html
It says that:
> You can use any cache plugin shipped with Ansible to cache inventory, but you cannot use a cache plugin inside a collection
Well, `jsonfile` has been moved to the `community.general` collection, so it no longer works. The only cache plugin that is part of ansible-base is `memory`, which is a bit useless for dynamic inventories.
Will there be a way in 2.10 to use cache plugins from a collection?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cache
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10
```
issue_url: https://github.com/ansible/ansible/issues/69092
pull_url: https://github.com/ansible/ansible/pull/69100
before_fix_sha: f6860a7a89c30a3dce1ab54d4964dc188cba82c2
after_fix_sha: 34458f3569be1b8592e0c433ce09c6add86893da
report_datetime: 2020-04-22T09:25:50Z | language: python | commit_datetime: 2020-05-05T20:10:57Z
updated_file: docs/docsite/rst/dev_guide/developing_plugins.rst
file_content:
.. _developing_plugins:
.. _plugin_guidelines:
******************
Developing plugins
******************
.. contents::
:local:
Plugins augment Ansible's core functionality with logic and features that are accessible to all modules. Ansible ships with a number of handy plugins, and you can easily write your own. All plugins must:
* be written in Python
* raise errors
* return strings in unicode
* conform to Ansible's configuration and documentation standards
Once you've reviewed these general guidelines, you can skip to the particular type of plugin you want to develop.
Writing plugins in Python
=========================
You must write your plugin in Python so it can be loaded by the ``PluginLoader`` and returned as a Python object that any module can use. Since your plugin will execute on the controller, you must write it in a :ref:`compatible version of Python <control_node_requirements>`.
Raising errors
==============
You should return errors encountered during plugin execution by raising ``AnsibleError()`` or a similar class with a message describing the error. When wrapping other exceptions into error messages, you should always use the ``to_native`` Ansible function to ensure proper string compatibility across Python versions:
.. code-block:: python

    from ansible.module_utils._text import to_native

    try:
        cause_an_exception()
    except Exception as e:
        raise AnsibleError('Something happened, this was original exception: %s' % to_native(e))
Check the different `AnsibleError objects <https://github.com/ansible/ansible/blob/devel/lib/ansible/errors/__init__.py>`_ and see which one applies best to your situation.
String encoding
===============
You must convert any strings returned by your plugin into Python's unicode type. Converting to unicode ensures that these strings can run through Jinja2. To convert strings:
.. code-block:: python

    from ansible.module_utils._text import to_text
    result_string = to_text(result_string)
Plugin configuration & documentation standards
==============================================
To define configurable options for your plugin, describe them in the ``DOCUMENTATION`` section of the python file. Callback and connection plugins have declared configuration requirements this way since Ansible version 2.4; most plugin types now do the same. This approach ensures that the documentation of your plugin's options will always be correct and up-to-date. To add a configurable option to your plugin, define it in this format:
.. code-block:: yaml

    options:
      option_name:
        description: describe this config option
        default: default value for this config option
        env:
          - name: NAME_OF_ENV_VAR
        ini:
          - section: section_of_ansible.cfg_where_this_config_option_is_defined
            key: key_used_in_ansible.cfg
        required: True/False
        type: boolean/float/integer/list/none/path/pathlist/pathspec/string/tmppath
        version_added: X.x
To access the configuration settings in your plugin, use ``self.get_option(<option_name>)``. For most plugin types, the controller pre-populates the settings. If you need to populate settings explicitly, use a ``self.set_options()`` call.
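As an illustration, here is a minimal sketch of reading a declared option inside a lookup plugin's ``run`` method; the option name ``timeout`` is hypothetical and stands in for whatever your plugin declares in its ``DOCUMENTATION``:

.. code-block:: python

    from ansible.plugins.lookup import LookupBase

    class LookupModule(LookupBase):

        def run(self, terms, variables=None, **kwargs):
            # Populate options explicitly in case the controller has not done it.
            self.set_options(var_options=variables, direct=kwargs)
            # 'timeout' is a hypothetical option declared in DOCUMENTATION.
            timeout = self.get_option('timeout')
            return [timeout]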
Plugins that support embedded documentation (see :ref:`ansible-doc` for the list) must include well-formed doc strings to be considered for merge into the Ansible repo. If you inherit from a plugin, you must document the options it takes, either via a documentation fragment or as a copy. See :ref:`module_documenting` for more information on correct documentation. Thorough documentation is a good idea even if you're developing a plugin for local use.
Developing particular plugin types
==================================
.. _developing_actions:
Action plugins
--------------
Action plugins let you integrate local processing and local data with module functionality.
To create an action plugin, create a new class with the Base(ActionBase) class as the parent:
.. code-block:: python

    from ansible.plugins.action import ActionBase

    class ActionModule(ActionBase):
        pass
From there, execute the module using the ``_execute_module`` method to call the original module.
After successful execution of the module, you can modify the module return data.
.. code-block:: python

    module_return = self._execute_module(module_name='<NAME_OF_MODULE>',
                                         module_args=module_args,
                                         task_vars=task_vars, tmp=tmp)
For example, if you wanted to check the time difference between your Ansible controller and your target machine(s), you could write an action plugin to check the local time and compare it to the return data from Ansible's ``setup`` module:
.. code-block:: python

    #!/usr/bin/python
    # Make coding more python3-ish, this is required for contributions to Ansible
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    from ansible.plugins.action import ActionBase
    from datetime import datetime


    class ActionModule(ActionBase):
        def run(self, tmp=None, task_vars=None):
            super(ActionModule, self).run(tmp, task_vars)
            module_args = self._task.args.copy()
            module_return = self._execute_module(module_name='setup',
                                                 module_args=module_args,
                                                 task_vars=task_vars, tmp=tmp)
            ret = dict()
            remote_date = None
            if not module_return.get('failed'):
                for key, value in module_return['ansible_facts'].items():
                    if key == 'ansible_date_time':
                        remote_date = value['iso8601']

            if remote_date:
                remote_date_obj = datetime.strptime(remote_date, '%Y-%m-%dT%H:%M:%SZ')
                time_delta = datetime.now() - remote_date_obj
                ret['delta_seconds'] = time_delta.seconds
                ret['delta_days'] = time_delta.days
                ret['delta_microseconds'] = time_delta.microseconds

            return dict(ansible_facts=dict(ret))
This code checks the time on the controller, captures the date and time for the remote machine using the ``setup`` module, and calculates the difference between the captured time and
the local time, returning the time delta in days, seconds and microseconds.
For practical examples of action plugins, see the source code for the `action plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/action>`_.
.. _developing_cache_plugins:
Cache plugins
-------------
Cache plugins store gathered facts and data retrieved by inventory plugins. Only fact caching is currently supported by cache plugins in collections.
Import cache plugins using the cache_loader so you can use ``self.set_options()`` and ``self.get_option(<option_name>)``. If you import a cache plugin directly in the code base, you can only access options via ``ansible.constants``, and you break the cache plugin's ability to be used by an inventory plugin.
.. code-block:: python

    from ansible.plugins.loader import cache_loader

    [...]

    plugin = cache_loader.get('custom_cache', **cache_kwargs)
There are two base classes for cache plugins: ``BaseCacheModule`` for database-backed caches and ``BaseFileCacheModule`` for file-backed caches.
To create a cache plugin, start by creating a new ``CacheModule`` class with the appropriate base class. If your plugin defines an ``__init__`` method, initialize the base class with any provided args and kwargs so the plugin stays compatible with inventory plugin cache options. The base class calls ``self.set_options(direct=kwargs)``. After the base class ``__init__`` method has run, use ``self.get_option(<option_name>)`` to access cache options.
New cache plugins should take the options ``_uri``, ``_prefix``, and ``_timeout`` to be consistent with existing cache plugins.
.. code-block:: python

    from ansible.plugins.cache import BaseCacheModule

    class CacheModule(BaseCacheModule):
        def __init__(self, *args, **kwargs):
            super(CacheModule, self).__init__(*args, **kwargs)
            self._connection = self.get_option('_uri')
            self._prefix = self.get_option('_prefix')
            self._timeout = self.get_option('_timeout')
If you use the ``BaseCacheModule``, you must implement the methods ``get``, ``contains``, ``keys``, ``set``, ``delete``, ``flush``, and ``copy``. The ``contains`` method should return a boolean that indicates if the key exists and has not expired. Unlike file-based caches, the ``get`` method does not raise a KeyError if the cache has expired.
If you use the ``BaseFileCacheModule``, you must implement ``_load`` and ``_dump`` methods that will be called from the base class methods ``get`` and ``set``.
If your cache plugin stores JSON, use ``AnsibleJSONEncoder`` in the ``_dump`` or ``set`` method and ``AnsibleJSONDecoder`` in the ``_load`` or ``get`` method.
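To illustrate those requirements, here is a minimal sketch of a file-backed cache plugin whose ``_dump`` and ``_load`` methods round-trip JSON through the Ansible encoder and decoder; it is a simplified example, not one of the shipped plugins:

.. code-block:: python

    import json

    from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
    from ansible.plugins.cache import BaseFileCacheModule

    class CacheModule(BaseFileCacheModule):
        """Sketch of a JSON file cache; paths and options come from the base class."""

        def _load(self, filepath):
            # Called from the base class 'get' method.
            with open(filepath, 'r') as f:
                return json.load(f, cls=AnsibleJSONDecoder)

        def _dump(self, value, filepath):
            # Called from the base class 'set' method.
            with open(filepath, 'w') as f:
                f.write(json.dumps(value, cls=AnsibleJSONEncoder, sort_keys=True, indent=4))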
For example cache plugins, see the source code for the `cache plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/cache>`_.
.. _developing_callbacks:
Callback plugins
----------------
Callback plugins add new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command line programs.
To create a callback plugin, create a new class with the Base(Callbacks) class as the parent:
.. code-block:: python

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        pass
From there, override the specific methods from the CallbackBase that you want to provide a callback for.
For plugins intended for use with Ansible version 2.0 and later, you should only override methods that start with ``v2``.
For a complete list of methods that you can override, please see ``__init__.py`` in the
`lib/ansible/plugins/callback <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_ directory.
The following is a modified example of how Ansible's timer plugin is implemented,
but with an extra option so you can see how configuration works in Ansible version 2.4 and later:
.. code-block:: python

    # Make coding more python3-ish, this is required for contributions to Ansible
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    # not only visible to ansible-doc, it also 'declares' the options the plugin requires and how to configure them.
    DOCUMENTATION = '''
      callback: timer
      callback_type: aggregate
      requirements:
        - whitelist in configuration
      short_description: Adds time to play stats
      version_added: "2.0"
      description:
          - This callback just adds total play duration to the play stats.
      options:
        format_string:
          description: format of the string shown to user at play end
          ini:
            - section: callback_timer
              key: format_string
          env:
            - name: ANSIBLE_CALLBACK_TIMER_FORMAT
          default: "Playbook run took %s days, %s hours, %s minutes, %s seconds"
    '''

    from datetime import datetime

    from ansible.plugins.callback import CallbackBase


    class CallbackModule(CallbackBase):
        """
        This callback module tells you how long your plays ran for.
        """
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'aggregate'
        CALLBACK_NAME = 'namespace.collection_name.timer'

        # only needed if you ship it and don't want to enable by default
        CALLBACK_NEEDS_WHITELIST = True

        def __init__(self):
            # make sure the expected objects are present, calling the base's __init__
            super(CallbackModule, self).__init__()
            # start the timer when the plugin is loaded, the first play should start a few milliseconds after.
            self.start_time = datetime.now()

        def _days_hours_minutes_seconds(self, runtime):
            ''' internal helper method for this callback '''
            minutes = (runtime.seconds // 60) % 60
            r_seconds = runtime.seconds - (minutes * 60)
            return runtime.days, runtime.seconds // 3600, minutes, r_seconds

        # this is only event we care about for display, when the play shows its summary stats; the rest are ignored by the base class
        def v2_playbook_on_stats(self, stats):
            end_time = datetime.now()
            runtime = end_time - self.start_time

            # Shows the usage of a config option declared in the DOCUMENTATION variable. Ansible will have set it when it loads the plugin.
            # Also note the use of the display object to print to screen. This is available to all callbacks, and you should use this over printing yourself
            self._display.display(self._plugin_options['format_string'] % (self._days_hours_minutes_seconds(runtime)))
Note that the ``CALLBACK_VERSION`` and ``CALLBACK_NAME`` definitions are required for properly functioning plugins for Ansible version 2.0 and later. ``CALLBACK_TYPE`` is mostly needed to distinguish 'stdout' plugins from the rest, since you can only load one plugin that writes to stdout.
For example callback plugins, see the source code for the `callback plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_.
.. _developing_connection_plugins:
Connection plugins
------------------
Connection plugins allow Ansible to connect to the target hosts so it can execute tasks on them. Ansible ships with many connection plugins, but only one can be used per host at a time. The most commonly used connection plugins are the ``paramiko`` SSH, native ssh (just called ``ssh``), and ``local`` connection types. All of these can be used in playbooks and with ``/usr/bin/ansible`` to connect to remote machines.
Ansible version 2.1 introduced the ``smart`` connection plugin. The ``smart`` connection type allows Ansible to automatically select either the ``paramiko`` or ``openssh`` connection plugin based on system capabilities, or the ``ssh`` connection plugin if OpenSSH supports ControlPersist.
To create a new connection plugin (for example, to support SNMP, Message bus, or other transports), copy the format of one of the existing connection plugins and drop it into the ``connection`` directory on your :ref:`local plugin path <local_plugins>`.
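The sketch below shows the rough shape such a plugin takes; the method names follow the ``ConnectionBase`` interface, while the transport name and all behavior are placeholders:

.. code-block:: python

    from ansible.plugins.connection import ConnectionBase

    class Connection(ConnectionBase):
        ''' Hypothetical transport, for illustration only. '''

        transport = 'my_transport'

        def _connect(self):
            # Establish the connection to self._play_context.remote_addr here.
            self._connected = True
            return self

        def exec_command(self, cmd, in_data=None, sudoable=True):
            super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
            # Run 'cmd' on the target and return (returncode, stdout, stderr).
            raise NotImplementedError

        def put_file(self, in_path, out_path):
            super(Connection, self).put_file(in_path, out_path)
            # Copy a local file to the target.
            raise NotImplementedError

        def fetch_file(self, in_path, out_path):
            super(Connection, self).fetch_file(in_path, out_path)
            # Copy a file from the target to the controller.
            raise NotImplementedError

        def close(self):
            self._connected = False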
For example connection plugins, see the source code for the `connection plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/connection>`_.
.. _developing_filter_plugins:
Filter plugins
--------------
Filter plugins manipulate data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the filter plugins shipped with Ansible reside in a ``core.py``.
Filter plugins do not use the standard configuration and documentation system described above.
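They do follow a simple discovery convention, though: Ansible looks for a ``FilterModule`` class with a ``filters`` method that maps filter names to callables. A minimal sketch, with an invented filter name:

.. code-block:: python

    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type


    def reverse_string(value):
        ''' Illustrative filter: reverse a string. '''
        return value[::-1]


    class FilterModule(object):
        ''' Ansible discovers filter plugins through this class. '''

        def filters(self):
            # Map Jinja2 filter names to callables.
            return {'reverse_string': reverse_string}

You could then write ``{{ 'hello' | reverse_string }}`` in a template or playbook.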
For example filter plugins, see the source code for the `filter plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/filter>`_.
.. _developing_inventory_plugins:
Inventory plugins
-----------------
Inventory plugins parse inventory sources and form an in-memory representation of the inventory. Inventory plugins were added in Ansible version 2.4.
You can see the details for inventory plugins in the :ref:`developing_inventory` page.
.. _developing_lookup_plugins:
Lookup plugins
--------------
Lookup plugins pull in data from external data stores. Lookup plugins can be used within playbooks both for looping --- playbook language constructs like ``with_fileglob`` and ``with_items`` are implemented via lookup plugins --- and to return values into a variable or parameter.
Lookup plugins are very flexible, allowing you to retrieve and return any type of data. When writing lookup plugins, always return data of a consistent type that can be easily consumed in a playbook. Avoid parameters that change the returned data type. If there is a need to return a single value sometimes and a complex dictionary other times, write two different lookup plugins.
Ansible includes many :ref:`filters <playbooks_filters>` which can be used to manipulate the data returned by a lookup plugin. Sometimes it makes sense to do the filtering inside the lookup plugin, other times it is better to return results that can be filtered in the playbook. Keep in mind how the data will be referenced when determining the appropriate level of filtering to be done inside the lookup plugin.
Here's a simple lookup plugin implementation --- this lookup returns the contents of a text file as a variable:
.. code-block:: python

    # python 3 headers, required if submitting to Ansible
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    DOCUMENTATION = """
        lookup: file
        author: Daniel Hokka Zakrisson <[email protected]>
        version_added: "0.9"
        short_description: read file contents
        description:
            - This lookup returns the contents from a file on the Ansible controller's file system.
        options:
          _terms:
            description: path(s) of files to read
            required: True
        notes:
          - if read in variable context, the file can be interpreted as YAML if the content is valid to the parser.
          - this lookup does not understand globbing --- use the fileglob lookup instead.
    """

    from ansible.errors import AnsibleError, AnsibleParserError
    from ansible.plugins.lookup import LookupBase
    from ansible.utils.display import Display

    display = Display()


    class LookupModule(LookupBase):

        def run(self, terms, variables=None, **kwargs):

            # lookups in general are expected to both take a list as input and output a list
            # this is done so they work with the looping construct 'with_'.
            ret = []
            for term in terms:
                display.debug("File lookup term: %s" % term)

                # Find the file in the expected search path, using a class method
                # that implements the 'expected' search path for Ansible plugins.
                lookupfile = self.find_file_in_search_path(variables, 'files', term)

                # Don't use print or your own logging, the display class
                # takes care of it in a unified way.
                display.vvvv(u"File lookup using %s as file" % lookupfile)
                try:
                    if lookupfile:
                        contents, show_data = self._loader._get_file_contents(lookupfile)
                        ret.append(contents.rstrip())
                    else:
                        # Always use ansible error classes to throw 'final' exceptions,
                        # so the Ansible engine will know how to deal with them.
                        # The Parser error indicates invalid options passed
                        raise AnsibleParserError()
                except AnsibleParserError:
                    raise AnsibleError("could not locate file in lookup: %s" % term)

            return ret
The following is an example of how this lookup is called::

    ---
    - hosts: all
      vars:
        contents: "{{ lookup('namespace.collection_name.file', '/etc/foo.txt') }}"

      tasks:
        - debug:
            msg: the value of foo.txt is {{ contents }} as seen today {{ lookup('pipe', 'date +"%Y-%m-%d"') }}
For example lookup plugins, see the source code for the `lookup plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/lookup>`_.
For more usage examples of lookup plugins, see :ref:`Using Lookups<playbooks_lookups>`.
.. _developing_test_plugins:
Test plugins
------------
Test plugins verify data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the test plugins shipped with Ansible reside in a ``core.py``. These are especially useful in conjunction with some filter plugins like ``map`` and ``select``; they are also available for conditional directives like ``when:``.
Test plugins do not use the standard configuration and documentation system described above.
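Like filter plugins, they follow a simple discovery convention: a ``TestModule`` class with a ``tests`` method that maps test names to callables. A minimal sketch, with an invented test name:

.. code-block:: python

    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type


    def uppercase(value):
        ''' Illustrative test: True if the string is entirely upper case. '''
        return value == value.upper()


    class TestModule(object):
        ''' Ansible discovers test plugins through this class. '''

        def tests(self):
            # Map Jinja2 test names to callables.
            return {'uppercase': uppercase}

You could then write ``when: myvar is uppercase`` in a playbook.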
For example test plugins, see the source code for the `test plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/test>`_.
.. _developing_vars_plugins:
Vars plugins
------------
Vars plugins inject additional variable data into Ansible runs that did not come from an inventory source, playbook, or command line. Playbook constructs like 'host_vars' and 'group_vars' work using vars plugins.
Vars plugins were partially implemented in Ansible 2.0 and rewritten to be fully implemented starting with Ansible 2.4. Vars plugins in collections are supported starting with Ansible 2.10 (see below).
Older plugins used a ``run`` method as their main body/work:
.. code-block:: python

    def run(self, name, vault_password=None):
        pass # your code goes here
Ansible 2.0 did not pass passwords to older plugins, so vaults were unavailable.
Most of the work now happens in the ``get_vars`` method which is called from the VariableManager when needed.
.. code-block:: python

    def get_vars(self, loader, path, entities):
        pass # your code goes here
The parameters are:
* loader: Ansible's DataLoader. The DataLoader can read files, auto-load JSON/YAML and decrypt vaulted data, and cache read files.
* path: this is 'directory data' for every inventory source and the current play's playbook directory, so they can search for data in reference to them. ``get_vars`` will be called at least once per available path.
* entities: these are host or group names that are pertinent to the variables needed. The plugin will get called once for hosts and again for groups.
This ``get_vars`` method just needs to return a dictionary structure with the variables.
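For illustration, here is a minimal, hypothetical ``get_vars`` implementation that attaches one invented variable to every host entity:

.. code-block:: python

    from ansible.inventory.host import Host
    from ansible.plugins.vars import BaseVarsPlugin

    class VarsModule(BaseVarsPlugin):

        def get_vars(self, loader, path, entities):
            super(VarsModule, self).get_vars(loader, path, entities)
            data = {}
            for entity in entities:
                # Entities are hosts on one call and groups on another;
                # this sketch only handles hosts.
                if isinstance(entity, Host):
                    # 'example_custom_var' is a hypothetical variable name.
                    data['example_custom_var'] = entity.name
            return data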
Since Ansible version 2.4, vars plugins only execute as needed when preparing to execute a task. This avoids the costly 'always execute' behavior that occurred during inventory construction in older versions of Ansible. Since Ansible version 2.10, vars plugin execution can be toggled by the user to run when preparing to execute a task or after importing an inventory source.
Since Ansible 2.10, vars plugins can require whitelisting. Vars plugins that don't require whitelisting will run by default. To require whitelisting for your plugin set the class variable ``REQUIRES_WHITELIST``:
.. code-block:: python

    class VarsModule(BaseVarsPlugin):
        REQUIRES_WHITELIST = True
Include the ``vars_plugin_staging`` documentation fragment to allow users to determine when vars plugins run.
.. code-block:: python

    DOCUMENTATION = '''
        vars: custom_hostvars
        version_added: "2.10"
        short_description: Load custom host vars
        description: Load custom host vars
        options:
          stage:
            ini:
              - key: stage
                section: vars_custom_hostvars
            env:
              - name: ANSIBLE_VARS_PLUGIN_STAGE
        extends_documentation_fragment:
          - vars_plugin_staging
    '''
Also since Ansible 2.10, vars plugins can reside in collections. Vars plugins in collections must require whitelisting to be functional.
For example vars plugins, see the source code for the `vars plugins included with Ansible Core
<https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/vars>`_.
.. seealso::
:ref:`all_modules`
List of all modules
:ref:`developing_api`
Learn about the Python API for task execution
:ref:`developing_inventory`
Learn about how to develop dynamic inventory sources
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
status: closed | repo_name: ansible/ansible | repo_url: https://github.com/ansible/ansible | issue_id: 69092
title: cache plugins vs collections (jsonfile)
body: (same as issue 69092 above)
issue_url: https://github.com/ansible/ansible/issues/69092
pull_url: https://github.com/ansible/ansible/pull/69100
before_fix_sha: f6860a7a89c30a3dce1ab54d4964dc188cba82c2
after_fix_sha: 34458f3569be1b8592e0c433ce09c6add86893da
report_datetime: 2020-04-22T09:25:50Z | language: python | commit_datetime: 2020-05-05T20:10:57Z
updated_file: docs/docsite/rst/plugins/cache.rst
file_content:
.. _cache_plugins:
Cache Plugins
=============
.. contents::
:local:
:depth: 2
Cache plugins allow Ansible to store gathered facts or inventory source data without the performance hit of retrieving them from source.
The default cache plugin is the :ref:`memory <memory_cache>` plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs. Some of these cache plugins write to files, others write to databases.
You can use different cache plugins for inventory and facts. If you enable inventory caching without setting an inventory-specific cache plugin, Ansible uses the fact cache plugin for both facts and inventory.
.. _enabling_cache:
Enabling Fact Cache Plugins
---------------------------
Fact caching is always enabled. However, only one fact cache plugin can be active at a time. You can select the cache plugin to use for fact caching in the Ansible configuration, either with an environment variable:
.. code-block:: shell

    export ANSIBLE_CACHE_PLUGIN=jsonfile
or in the ``ansible.cfg`` file:
.. code-block:: ini

    [defaults]
    fact_caching=redis
If the cache plugin is in a collection, use the fully qualified name:

.. code-block:: ini

    [defaults]
    fact_caching = namespace.collection_name.cache_plugin_name
To enable a custom cache plugin, save it in a ``cache_plugins`` directory adjacent to your play, inside a role, or in one of the directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
You also need to configure other settings specific to each plugin. Consult the individual plugin documentation or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
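For instance, the ``jsonfile`` fact cache needs a writable location and usually a timeout. A typical ``ansible.cfg`` sketch (the path is illustrative):

.. code-block:: ini

    [defaults]
    fact_caching = jsonfile
    # Directory where per-host JSON cache files are written (illustrative path).
    fact_caching_connection = /tmp/ansible_fact_cache
    # Cache entries expire after this many seconds.
    fact_caching_timeout = 7200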
Enabling Inventory Cache Plugins
--------------------------------
Inventory caching is disabled by default. To cache inventory data, you must enable inventory caching and then select the specific cache plugin you want to use. Not all inventory plugins support caching, so check the documentation for the inventory plugin(s) you want to use. You can enable inventory caching with an environment variable:
.. code-block:: shell

    export ANSIBLE_INVENTORY_CACHE=True
or in the ``ansible.cfg`` file:
.. code-block:: ini

    [inventory]
    cache=True
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml

    # dev.aws_ec2.yaml
    plugin: aws_ec2
    cache: True
Only one inventory cache plugin can be active at a time. You can set it with an environment variable:
.. code-block:: shell

    export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
or in the ``ansible.cfg`` file:

.. code-block:: ini

    [inventory]
    cache_plugin=jsonfile
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml

    # dev.aws_ec2.yaml
    plugin: aws_ec2
    cache_plugin: jsonfile
To cache inventory with a custom plugin in your plugin path, follow the :ref:`developer guide on cache plugins<developing_cache_plugins>`.
You can use any cache plugin shipped with Ansible to cache inventory, but you cannot use a cache plugin inside a collection. If you enable caching for inventory plugins without selecting an inventory-specific cache plugin, Ansible falls back to caching inventory with the fact cache plugin you configured. Consult the individual inventory plugin documentation or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
.. Note: In Ansible 2.7 and earlier, inventory plugins can only use file-based cache plugins, such as jsonfile, pickle, and yaml.
.. _using_cache:
Using Cache Plugins
-------------------
Cache plugins are used automatically once they are enabled.
.. _cache_plugin_list:
Plugin List
-----------
You can use ``ansible-doc -t cache -l`` to see the list of available plugins.
Use ``ansible-doc -t cache <plugin name>`` to see specific documentation and examples.
.. toctree::
    :maxdepth: 1
    :glob:

    cache/*
.. seealso::
:ref:`action_plugins`
Ansible Action plugins
:ref:`callback_plugins`
Ansible callback plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`shell_plugins`
Ansible Shell plugins
:ref:`strategy_plugins`
Ansible Strategy plugins
:ref:`vars_plugins`
Ansible Vars plugins
`User Mailing List <https://groups.google.com/forum/#!forum/ansible-devel>`_
Have a question? Stop by the google group!
`webchat.freenode.net <https://webchat.freenode.net>`_
#ansible IRC chat channel
status: closed | repo_name: ansible/ansible | repo_url: https://github.com/ansible/ansible | issue_id: 69092
title: cache plugins vs collections (jsonfile)
body: (same as issue 69092 above)
issue_url: https://github.com/ansible/ansible/issues/69092
pull_url: https://github.com/ansible/ansible/pull/69100
before_fix_sha: f6860a7a89c30a3dce1ab54d4964dc188cba82c2
after_fix_sha: 34458f3569be1b8592e0c433ce09c6add86893da
report_datetime: 2020-04-22T09:25:50Z | language: python | commit_datetime: 2020-05-05T20:10:57Z
updated_file: docs/docsite/rst/plugins/inventory.rst
file_content:
.. _inventory_plugins:
Inventory Plugins
=================
.. contents::
:local:
:depth: 2
Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, either via the ``-i /path/to/file`` and/or ``-i 'host1, host2'`` command line parameters or from other configuration sources.
.. _enabling_inventory:
Enabling inventory plugins
--------------------------
Most inventory plugins shipped with Ansible are disabled by default and need to be whitelisted in your
:ref:`ansible.cfg <ansible_configuration_settings>` file in order to function. This is how the default whitelist looks in the
config file that ships with Ansible:
.. code-block:: ini

    [inventory]
    enable_plugins = host_list, script, auto, yaml, ini, toml
This list also establishes the order in which each plugin tries to parse an inventory source. Any plugins left out of the list will not be considered, so you can 'optimize' your inventory loading by minimizing it to what you actually use. For example:
.. code-block:: ini

    [inventory]
    enable_plugins = advanced_host_list, constructed, yaml
The ``auto`` inventory plugin can be used to automatically determine which inventory plugin to use for a YAML configuration file. It can also be used for inventory plugins in a collection. To whitelist specific inventory plugins in a collection, you need to use the fully qualified name:
.. code-block:: ini

    [inventory]
    enable_plugins = namespace.collection_name.inventory_plugin_name
.. _using_inventory:
Using inventory plugins
-----------------------
The only requirement for using an inventory plugin after it is enabled is to provide an inventory source to parse.
Ansible will try to use the list of enabled inventory plugins, in order, against each inventory source provided.
Once an inventory plugin succeeds at parsing a source, any remaining inventory plugins will be skipped for that source.
To start using an inventory plugin with a YAML configuration source, create a file with the accepted filename schema for the plugin in question, then add ``plugin: plugin_name``. Each plugin documents any naming restrictions. For example, the aws_ec2 inventory plugin file has to end with ``aws_ec2.(yml|yaml)``:

.. code-block:: yaml

    # demo.aws_ec2.yml
    plugin: aws_ec2
Or for the openstack plugin, the file has to be called ``clouds.yml`` or ``openstack.(yml|yaml)``:

.. code-block:: yaml

    # clouds.yml or openstack.(yml|yaml)
    plugin: openstack
To use a plugin in a collection, provide the fully qualified name:

.. code-block:: yaml

    plugin: namespace.collection_name.inventory_plugin_name
The ``auto`` inventory plugin is enabled by default and uses the ``plugin`` field in the configuration source to identify the plugin that should attempt to parse it. You can configure the whitelist and precedence of inventory plugins with the ``enable_plugins`` list in the ``[inventory]`` section of ``ansible.cfg``. After enabling the plugin and providing any required options, you can view the populated inventory with ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text

    @all:
      |--@aws_ec2:
      |  |--ec2-12-345-678-901.compute-1.amazonaws.com
      |  |--ec2-98-765-432-10.compute-1.amazonaws.com
      |--@ungrouped:
If you are using an inventory plugin in a playbook-adjacent collection and want to test your setup with ``ansible-inventory``, you will need to use the ``--playbook-dir`` flag.
You can set the default inventory path (via ``inventory`` in the ``[defaults]`` section of ``ansible.cfg`` or the :envvar:`ANSIBLE_INVENTORY` environment variable) to your inventory source(s). Now running ``ansible-inventory --graph`` should yield the same output as when you passed your YAML configuration source(s) directly. You can add custom inventory plugins to your plugin path to use in the same way.
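For example, a minimal ``ansible.cfg`` sketch pointing the default inventory at a YAML configuration source (the path is illustrative):

.. code-block:: ini

    [defaults]
    inventory = /path/to/demo.aws_ec2.yml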
Your inventory source might be a directory of inventory configuration files. The constructed inventory plugin only operates on those hosts already in inventory, so you may want the constructed inventory configuration parsed at a particular point (such as last). Ansible parses the directory recursively, alphabetically. You cannot configure the parsing approach, so name your files to make it work predictably. Inventory plugins that extend constructed features directly can work around that restriction by adding constructed options in addition to the inventory plugin options. Otherwise, you can use ``-i`` with multiple sources to impose a specific order, e.g. ``-i demo.aws_ec2.yml -i clouds.yml -i constructed.yml``.
You can create dynamic groups using host variables with the constructed ``keyed_groups`` option. The option ``groups`` can also be used to create groups and ``compose`` creates and modifies host variables. Here is an aws_ec2 example utilizing constructed features:
.. code-block:: yaml

    # demo.aws_ec2.yml
    plugin: aws_ec2
    regions:
      - us-east-1
      - us-east-2
    keyed_groups:
      # add hosts to tag_Name_value groups for each aws_ec2 host's tags.Name variable
      - key: tags.Name
        prefix: tag_Name_
        separator: ""
    groups:
      # add hosts to the group development if any of the dictionary's keys or values is the word 'devel'
      development: "'devel' in (tags|list)"
    compose:
      # set the ansible_host variable to connect with the private IP address without changing the hostname
      ansible_host: private_ip_address
Now the output of ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text

    @all:
      |--@aws_ec2:
      |  |--ec2-12-345-678-901.compute-1.amazonaws.com
      |  |--ec2-98-765-432-10.compute-1.amazonaws.com
      |  |--...
      |--@development:
      |  |--ec2-12-345-678-901.compute-1.amazonaws.com
      |  |--ec2-98-765-432-10.compute-1.amazonaws.com
      |--@tag_Name_ECS_Instance:
      |  |--ec2-98-765-432-10.compute-1.amazonaws.com
      |--@tag_Name_Test_Server:
      |  |--ec2-12-345-678-901.compute-1.amazonaws.com
      |--@ungrouped:
If a host does not have the variables in the configuration above (i.e. ``tags.Name``, ``tags``, ``private_ip_address``), the host will not be added to groups other than those that the inventory plugin creates and the ``ansible_host`` host variable will not be modified.
If an inventory plugin supports caching, you can enable and set caching options for an individual YAML configuration source or for multiple inventory sources using environment variables or Ansible configuration files. If you enable caching for an inventory plugin without providing inventory-specific caching options, the inventory plugin will use fact-caching options. Here is an example of enabling caching for an individual YAML configuration file:
.. code-block:: yaml

    # demo.aws_ec2.yml
    plugin: aws_ec2
    cache: yes
    cache_plugin: jsonfile
    cache_timeout: 7200
    cache_connection: /tmp/aws_inventory
    cache_prefix: aws_ec2
Here is an example of setting inventory caching in an ``ansible.cfg`` file, where the cache plugin and the timeout fall back to the fact-caching defaults:

.. code-block:: ini

    [defaults]
    fact_caching = jsonfile
    fact_caching_connection = /tmp/ansible_facts
    cache_timeout = 3600

    [inventory]
    cache = yes
    cache_connection = /tmp/ansible_inventory
Besides cache plugins shipped with Ansible, cache plugins eligible for caching inventory can also reside in a custom cache plugin path. Cache plugins in collections are not supported yet for inventory.
.. _inventory_plugin_list:
Plugin List
-----------
You can use ``ansible-doc -t inventory -l`` to see the list of available plugins.
Use ``ansible-doc -t inventory <plugin name>`` to see plugin-specific documentation and examples.
.. toctree::
    :maxdepth: 1
    :glob:

    inventory/*
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`callback_plugins`
Ansible callback plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`playbooks_filters`
Jinja2 filter plugins
:ref:`playbooks_tests`
Jinja2 test plugins
:ref:`playbooks_lookups`
Jinja2 lookup plugins
:ref:`vars_plugins`
Ansible vars plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
status: closed | repo_name: ansible/ansible | repo_url: https://github.com/ansible/ansible | issue_id: 69092
title: cache plugins vs collections (jsonfile)
body: (same as issue 69092 above)
issue_url: https://github.com/ansible/ansible/issues/69092
pull_url: https://github.com/ansible/ansible/pull/69100
before_fix_sha: f6860a7a89c30a3dce1ab54d4964dc188cba82c2
after_fix_sha: 34458f3569be1b8592e0c433ce09c6add86893da
report_datetime: 2020-04-22T09:25:50Z | language: python | commit_datetime: 2020-05-05T20:10:57Z
updated_file: docs/docsite/rst/porting_guides/porting_guide_2.10.rst
file_content:
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
.. warning::
Links on this page may not point to the most recent versions of modules. In preparation for the release of 2.10, many plugins and modules have migrated to Collections on `Ansible Galaxy <https://galaxy.ansible.com>`_. For the current development status of Collections and FAQ see `Ansible Collections Community Guide <https://github.com/ansible-collections/general/blob/master/README.rst>`_. We expect the 2.10 Porting Guide to change frequently up to the 2.10 release. Follow the conversations about collections on our various :ref:`communication` channels for the latest information on the status of the ``devel`` branch.
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
* Fixed a bug on boolean keywords that made random strings evaluate to 'False'; now they return an error if they are not a proper boolean. For example, ``diff: yes-`` was returning ``False``.
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md55`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
.. warning::
Links on this page may not point to the most recent versions of modules. We will update them when we can.
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use ldap_attrs instead.
* vyos_static_route use vyos_static_routes instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_tag <ec2_tag_module>`: Support for ``list`` as a state has been deprecated. The ``ec2_tag_info`` can be used to fetch the tags on an EC2 resource.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
* :ref:`win_package <win_package_module>`: the ``username`` and ``password`` options will be removed. The same functionality can be done by using ``become: yes`` and ``become_flags: logon_type=new_credentials logon_flags=netcredentials_only`` on the task.
* :ref:`win_package <win_package_module>`: the ``ensure`` alias for the ``state`` option will be removed. Please use ``state`` instead of ``ensure``.
* :ref:`win_package <win_package_module>`: the ``productid`` alias for the ``product_id`` option will be removed. Please use ``product_id`` instead of ``productid``.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use vmware_host_dns instead.
Noteworthy module changes
-------------------------
* The ``datacenter`` option has been removed from :ref:`vmware_guest_find <vmware_guest_find_module>`
* The options ``ip_address`` and ``subnet_mask`` have been removed from :ref:`vmware_vmkernel <vmware_vmkernel_module>`; use the suboptions ``ip_address`` and ``subnet_mask`` of the ``network`` option instead.
* Ansible modules created with ``add_file_common_args=True`` added a number of undocumented arguments which were mostly there to ease implementing certain action plugins. The undocumented arguments ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode`` are now no longer added. Modules relying on these options to be added need to specify them by themselves.
* The ``AWSRetry`` decorator no longer catches ``NotFound`` exceptions by default. ``NotFound`` exceptions need to be explicitly added using ``catch_extra_error_codes``. Some AWS modules may see an increase in transient failures due to AWS's eventual consistency model.
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified, due to it executing potentially unknown scripts. It will follow the default behaviour of only running tests for files like ``*.tests.ps1``, which is built into Pester itself.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>`
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
* :ref:`nxos_igmp_interface <nxos_igmp_interface_module>` no longer supports the deprecated ``oif_prefix`` and ``oif_source`` options. These have been superseded by ``oif_ps``.
* :ref:`aws_s3 <aws_s3_module>` can now delete versioned buckets even when they are not empty - set mode to delete to delete a versioned bucket and everything in it.
* The parameter ``message`` in :ref:`grafana_dashboard <grafana_dashboard_module>` module is renamed to ``commit_message`` since ``message`` is used by Ansible Core engine internally.
* The parameter ``message`` in :ref:`datadog_monitor <datadog_monitor_module>` module is renamed to ``notification_message`` since ``message`` is used by Ansible Core engine internally.
* The parameter ``message`` in :ref:`bigpanda <bigpanda_module>` module is renamed to ``deployment_message`` since ``message`` is used by Ansible Core engine internally.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible 2.10, lookup plugin names passed as an argument to the ``lookup()`` function were treated as case-insensitive, as opposed to lookups invoked via ``with_<lookup_name>``. 2.10 brings consistency: both ``lookup()`` and ``with_`` are case-sensitive.
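For example, a task along these lines (a hypothetical sketch) would have resolved before 2.10 but now fails, because ``lookup()`` no longer matches plugin names case-insensitively:

.. code-block:: yaml

    # Resolved to the 'file' lookup before 2.10; now raises an error.
    - debug:
        msg: "{{ lookup('File', '/etc/hosts') }}"

    # Case-sensitive form that works in 2.10.
    - debug:
        msg: "{{ lookup('file', '/etc/hosts') }}"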
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
* Some undocumented arguments from ``FILE_COMMON_ARGUMENTS`` have been removed; plugins using these, in particular action plugins, need to be adjusted. The undocumented arguments which were removed are ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode``.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
status: closed | repo_name: ansible/ansible | repo_url: https://github.com/ansible/ansible | issue_id: 60595
title: Make sure systemd-python is installed instead of systemd
body:
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
There are two packages on PyPI: [systemd (0.16.1)](https://pypi.org/project/systemd/), and [systemd-python (234)](https://pypi.org/project/systemd-python/) (which is the correct package).
If the target machine has systemd (0.16.1) installed with `pip3 install systemd` instead of `pip3 install systemd-python`, ansible fails on any module that uses `journal.sendv()` with an error saying:
```
File "systemd/_journal.pyx", line 32, in systemd._journal._send
ValueError: Key name may not begin with an underscore
```
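A defensive import guard along these lines would avoid the crash; this is a hedged sketch (the `has_journal` flag mirrors the one in `module_utils/basic.py`), not necessarily the exact change merged in the linked PR:

```python
try:
    from systemd import journal
    # The PyPI 'systemd' package also exposes systemd.journal but lacks
    # sendv(); treat it as absent so logging falls back to syslog.
    has_journal = hasattr(journal, 'sendv')
except ImportError:
    has_journal = False
```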
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible/module_utils/basic.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = None
configured module search path = [u'/Users/rylanpolster/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
-
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Raspbian Stretch Lite (November 2018)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
On target:
```
pip3 uninstall systemd-python
pip3 install systemd
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: ls
command: ls
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
`changed: [pi]`
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/usr/lib/python3.5/imp.py", line 234, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.5/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 626, in _exec
File "<frozen importlib._bootstrap_external>", line 673, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/tmp/ansible_command_payload_afjk26dr/__main__.py", line 327, in <module>
File "/tmp/ansible_command_payload_afjk26dr/__main__.py", line 228, in main
File "/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py", line 691, in __init__
File "/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py", line 1940, in _log_invocation
File "/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py", line 1898, in log
File "systemd/_journal.pyx", line 68, in systemd._journal.send
File "systemd/_journal.pyx", line 32, in systemd._journal._send
ValueError: Key name may not begin with an underscore
fatal: [pi]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/usr/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/usr/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 673, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_command_payload_afjk26dr/__main__.py\", line 327, in <module>\n File \"/tmp/ansible_command_payload_afjk26dr/__main__.py\", line 228, in main\n File \"/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py\", line 691, in __init__\n File \"/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py\", line 1940, in _log_invocation\n File \"/tmp/ansible_command_payload_afjk26dr/ansible_command_payload.zip/ansible/module_utils/basic.py\", line 1898, in log\n File \"systemd/_journal.pyx\", line 68, in systemd._journal.send\n File \"systemd/_journal.pyx\", line 32, in systemd._journal._send\nValueError: Key name may not begin with an underscore\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
|
https://github.com/ansible/ansible/issues/60595
|
https://github.com/ansible/ansible/pull/60692
|
b309c142655888829200adbb356f12904effd98a
|
eb40ecc843d2afe530a2cb5b31c733373e52f7b6
| 2019-08-14T18:45:53Z |
python
| 2020-05-12T05:31:08Z |
changelogs/fragments/60595-systemd_import.yml
| |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
    from systemd import journal
    # The unrelated 'systemd' package on PyPI also satisfies this import, but
    # only the real systemd-python bindings provide journal.sendv(), so probe
    # for the method instead of trusting the import alone.
    has_journal = hasattr(journal, 'sendv')
except ImportError:
    has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
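def _example_digest_hex(algorithm, data):
    # Hypothetical helper, shown for illustration only (not part of the
    # upstream file): resolve a hasher by name from the
    # AVAILABLE_HASH_ALGORITHMS table built above. Note that 'md5' may be
    # absent on FIPS-enabled hosts even though hashlib imported fine.
    try:
        hasher = AVAILABLE_HASH_ALGORITHMS[algorithm]()
    except KeyError:
        raise ValueError('Hash algorithm %s is not available' % algorithm)
    hasher.update(data)
    return hasher.hexdigest()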
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
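# Usage sketch (hypothetical module code): passing add_file_common_args=True
# to AnsibleModule.__init__ below merges every FILE_COMMON_ARGUMENTS key into
# the module's argument_spec, after which the module can apply them with:
#
#   file_args = module.load_file_common_arguments(module.params)
#   changed = module.set_fs_attributes_if_different(file_args, False)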
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
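# Worked example for remove_values() (hypothetical input): matching substrings
# are masked recursively, including inside nested containers.
#
#   remove_values({'cmd': 'login --pass hunter2'}, {'hunter2'})
#   -> {'cmd': 'login --pass ********'}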
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
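# Worked example (hypothetical URL): credentials embedded in a URL are masked
# while the rest of the string is preserved.
#
#   heuristic_log_sanitize('http://user:[email protected]/path')
#   -> 'http://user:********@host.example.com/path'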
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
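# Usage sketch (hypothetical option name): declare env_fallback in an
# argument_spec so a parameter falls back to an environment variable when not
# supplied explicitly.
#
#   argument_spec = dict(
#       api_token=dict(type='str', no_log=True,
#                      fallback=(env_fallback, ['API_TOKEN'])),
#   )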
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
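# Usage sketch (hypothetical library name): pair missing_required_lib() with
# the import-guard pattern used at the top of this file.
#
#   try:
#       import requests
#       HAS_REQUESTS = True
#   except ImportError:
#       HAS_REQUESTS = False
#
#   if not HAS_REQUESTS:
#       module.fail_json(msg=missing_required_lib('requests'))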
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None):
deprecate(msg, version)
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows to overwrite the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the permissions apply to come first in the
# 'permlist' list. Take that element and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
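    # Worked example (assuming a regular file whose current mode is 0o640):
    #
    #   AnsibleModule._symbolic_mode_to_octal(path_stat, 'u+x,g-w,o=r') == 0o744
    #
    # 'u+x' adds the owner execute bit, 'g-w' is a no-op here (group already
    # lacks write permission), and 'o=r' sets "other" to read-only.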
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'], deprecation['version'])
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], message['version'])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value {0!r} (type {0.__class__.__name__}) in a string field was converted to {1!r} (type string). '
'If this does not look like what you expect, {2}').format(value, to_text(value), common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
for value in values:
try:
validated_params.append(type_checker(value))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
try:
param[k] = type_checker(value)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
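Example (illustrative sketch; assumes ``module`` is an AnsibleModule instance)::

    git_path = module.get_bin_path('git', required=True)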
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
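# Illustrative calls (sketch): a module typically finishes with either
#   module.exit_json(changed=True, msg='done')
# or, on error,
#   module.fail_json(msg='something went wrong', rc=1)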
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
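# Illustrative usage of the digest helpers (sketch; assumes the file exists):
#   checksum = module.sha256('/etc/hosts')
# digest_from_file() returns None for a missing path, so callers can compare
# checksums to decide whether a change is needed.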
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
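# Illustrative usage (sketch; assumes the file exists):
#   backupdest = module.backup_local('/etc/hosts')
# yields a path such as '/etc/hosts.1234.2020-05-01@12:00:00~'.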
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest, returns true on success
it uses os.rename to ensure this as it is an atomic operation; the rest of the function
works around limitations and corner cases, and ensures the selinux context is saved if possible'''
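# Illustrative call (sketch): module.atomic_move('/tmp/srcfile', '/etc/destfile')
# replaces the destination in one step when the filesystem allows os.rename.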
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (resource busy) and 26 (text file busy), which happen on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using a Python3 version older than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to move %s to %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# Sadly there are some situations where we cannot ensure atomicity; only if
# the user insists and we get the appropriate error do we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), self.get_buffer_size(file_descriptor))
if data == b(''):
rpipes.remove(file_descriptor)
return data
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False, it will be split into a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after the ``Popen`` object is created
but before communicating with the process.
(The ``Popen`` object will be passed to the callback as its first argument)
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
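Example (illustrative sketch; assumes ``module`` is an AnsibleModule instance)::

    rc, out, err = module.run_command(['/usr/bin/id', '-u'], check_rc=True)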
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
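# Illustrative conversions (sketch; assumed helper semantics):
#   self.bytes_to_human(1048576)  -> '1.00 MB'
#   self.human_to_bytes('1K')     -> 1024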
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,468 |
AIX tests temporarily disabled
|
##### SUMMARY
AIX tests have been temporarily disabled in https://github.com/ansible/ansible/commit/cc4c38ef7c053a03e08db272bed98ab9e2c0be99 due to provisioning failures.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
shippable.yml
##### ANSIBLE VERSION
devel
##### CONFIGURATION
Shippable
##### OS / ENVIRONMENT
Shippable
##### STEPS TO REPRODUCE
Run AIX tests.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Provisioning fails. Example:
https://app.shippable.com/github/ansible/ansible/runs/166171/94/console
|
https://github.com/ansible/ansible/issues/69468
|
https://github.com/ansible/ansible/pull/69469
|
cc4c38ef7c053a03e08db272bed98ab9e2c0be99
|
cdaf7da11a2cdffe7c9bd5cff7d1b2acfa8e95e1
| 2020-05-12T20:54:54Z |
python
| 2020-05-12T22:05:47Z |
shippable.yml
|
language: python
env:
matrix:
- T=none
matrix:
exclude:
- env: T=none
include:
- env: T=sanity/1
- env: T=sanity/2
- env: T=sanity/3
- env: T=sanity/4
- env: T=sanity/5
- env: T=units/2.6
- env: T=units/2.7
- env: T=units/3.5
- env: T=units/3.6
- env: T=units/3.7
- env: T=units/3.8
- env: T=units/3.9
- env: T=windows/2012/1
- env: T=windows/2012-R2/1
- env: T=windows/2016/1
- env: T=windows/2019/1
- env: T=osx/10.11/1
- env: T=rhel/7.8/1
- env: T=rhel/8.2/1
- env: T=freebsd/11.1/1
- env: T=freebsd/12.1/1
- env: T=linux/centos6/1
- env: T=linux/centos7/1
- env: T=linux/centos8/1
- env: T=linux/fedora31/1
- env: T=linux/fedora32/1
- env: T=linux/opensuse15py2/1
- env: T=linux/opensuse15/1
- env: T=linux/ubuntu1604/1
- env: T=linux/ubuntu1804/1
- env: T=osx/10.11/2
- env: T=rhel/7.8/2
- env: T=rhel/8.2/2
- env: T=freebsd/11.1/2
- env: T=freebsd/12.1/2
- env: T=linux/centos6/2
- env: T=linux/centos7/2
- env: T=linux/centos8/2
- env: T=linux/fedora31/2
- env: T=linux/fedora32/2
- env: T=linux/opensuse15py2/2
- env: T=linux/opensuse15/2
- env: T=linux/ubuntu1604/2
- env: T=linux/ubuntu1804/2
- env: T=osx/10.11/3
- env: T=rhel/7.8/3
- env: T=rhel/8.2/3
- env: T=freebsd/11.1/3
- env: T=freebsd/12.1/3
- env: T=linux/centos6/3
- env: T=linux/centos7/3
- env: T=linux/centos8/3
- env: T=linux/fedora31/3
- env: T=linux/fedora32/3
- env: T=linux/opensuse15py2/3
- env: T=linux/opensuse15/3
- env: T=linux/ubuntu1604/3
- env: T=linux/ubuntu1804/3
- env: T=osx/10.11/4
- env: T=rhel/7.8/4
- env: T=rhel/8.2/4
- env: T=freebsd/11.1/4
- env: T=freebsd/12.1/4
- env: T=linux/centos6/4
- env: T=linux/centos7/4
- env: T=linux/centos8/4
- env: T=linux/fedora31/4
- env: T=linux/fedora32/4
- env: T=linux/opensuse15py2/4
- env: T=linux/opensuse15/4
- env: T=linux/ubuntu1604/4
- env: T=linux/ubuntu1804/4
- env: T=osx/10.11/5
- env: T=rhel/7.8/5
- env: T=rhel/8.2/5
- env: T=freebsd/11.1/5
- env: T=freebsd/12.1/5
- env: T=linux/centos6/5
- env: T=linux/centos7/5
- env: T=linux/centos8/5
- env: T=linux/fedora31/5
- env: T=linux/fedora32/5
- env: T=linux/opensuse15py2/5
- env: T=linux/opensuse15/5
- env: T=linux/ubuntu1604/5
- env: T=linux/ubuntu1804/5
- env: T=fallaxy/2.7/1
- env: T=fallaxy/3.6/1
- env: T=i/osx/10.11
- env: T=i/rhel/7.8
- env: T=i/rhel/8.2
- env: T=i/freebsd/11.1
- env: T=i/freebsd/12.1
- env: T=i/linux/centos6
- env: T=i/linux/centos7
- env: T=i/linux/centos8
- env: T=i/linux/fedora31
- env: T=i/linux/fedora32
- env: T=i/linux/opensuse15py2
- env: T=i/linux/opensuse15
- env: T=i/linux/ubuntu1604
- env: T=i/linux/ubuntu1804
- env: T=i/windows/2012
- env: T=i/windows/2012-R2
- env: T=i/windows/2016
- env: T=i/windows/2019
- env: T=i/ios/csr1000v//1
- env: T=i/vyos/1.1.8/2.7/1
- env: T=i/vyos/1.1.8/3.6/1
- env: T=i/aws/2.7/1
- env: T=i/aws/3.6/1
- env: T=i/azure/2.7/1
- env: T=i/azure/3.6/1
- env: T=i/vcenter//1
- env: T=i/cs//1
- env: T=i/tower//1
- env: T=i/cloud//1
- env: T=i/hcloud//1
branches:
except:
- "*-patch-*"
- "revert-*-*"
build:
ci:
- test/utils/shippable/timing.sh test/utils/shippable/shippable.sh $T
integrations:
notifications:
- integrationName: email
type: email
on_success: never
on_failure: never
on_start: never
on_pull_request: never
- integrationName: irc
type: irc
recipients:
- "chat.freenode.net#ansible-notices"
on_success: change
on_failure: always
on_start: never
on_pull_request: always
- integrationName: slack
type: slack
recipients:
- "#shippable"
on_success: change
on_failure: always
on_start: never
on_pull_request: never
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,889 |
group contains deprecated call to be removed in 2.10
|
##### SUMMARY
group contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
b/ansible/inventory/group.py:54:20: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
b/ansible/inventory/group.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61889
|
https://github.com/ansible/ansible/pull/66650
|
cdaf7da11a2cdffe7c9bd5cff7d1b2acfa8e95e1
|
6086ea62ee5e47f3071410b302a10392d6e2437a
| 2019-09-05T20:41:10Z |
python
| 2020-05-13T14:16:32Z |
lib/ansible/inventory/group.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from itertools import chain
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
display = Display()
def to_safe_group_name(name, replacer="_", force=False, silent=False):
# Converts 'bad' characters in a string to underscores (or provided replacer) so they can be used as Ansible hosts or groups
warn = ''
if name: # when deserializing we might not have name yet
invalid_chars = C.INVALID_VARIABLE_NAMES.findall(name)
if invalid_chars:
msg = 'invalid character(s) "%s" in group name (%s)' % (to_text(set(invalid_chars)), to_text(name))
if C.TRANSFORM_INVALID_GROUP_CHARS not in ('never', 'ignore') or force:
name = C.INVALID_VARIABLE_NAMES.sub(replacer, name)
if not (silent or C.TRANSFORM_INVALID_GROUP_CHARS == 'silently'):
display.vvvv('Replacing ' + msg)
warn = 'Invalid characters were found in group names and automatically replaced, use -vvvv to see details'
else:
if C.TRANSFORM_INVALID_GROUP_CHARS == 'never':
display.vvvv('Not replacing %s' % msg)
warn = 'Invalid characters were found in group names but not replaced, use -vvvv to see details'
# remove this message after 2.10 AND changing the default to 'always'
group_chars_setting, group_chars_origin = C.config.get_config_value_and_origin('TRANSFORM_INVALID_GROUP_CHARS')
if group_chars_origin == 'default':
display.deprecated('The TRANSFORM_INVALID_GROUP_CHARS setting is set to allow bad characters in group names by default;'
' this will change, but remain user configurable, on deprecation', version='2.10')
if warn:
display.warning(warn)
return name
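# Illustrative example (hypothetical input): with the default replacer and a
# policy that allows replacement, to_safe_group_name('web-servers') returns
# 'web_servers'.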
class Group:
''' a group of ansible hosts '''
# __slots__ = [ 'name', 'hosts', 'vars', 'child_groups', 'parent_groups', 'depth', '_hosts_cache' ]
def __init__(self, name=None):
self.depth = 0
self.name = to_safe_group_name(name)
self.hosts = []
self._hosts = None
self.vars = {}
self.child_groups = []
self.parent_groups = []
self._hosts_cache = None
self.priority = 1
def __repr__(self):
return self.get_name()
def __str__(self):
return self.get_name()
def __getstate__(self):
return self.serialize()
def __setstate__(self, data):
return self.deserialize(data)
def serialize(self):
parent_groups = []
for parent in self.parent_groups:
parent_groups.append(parent.serialize())
self._hosts = None
result = dict(
name=self.name,
vars=self.vars.copy(),
parent_groups=parent_groups,
depth=self.depth,
hosts=self.hosts,
)
return result
def deserialize(self, data):
self.__init__()
self.name = data.get('name')
self.vars = data.get('vars', dict())
self.depth = data.get('depth', 0)
self.hosts = data.get('hosts', [])
self._hosts = None
parent_groups = data.get('parent_groups', [])
for parent_data in parent_groups:
g = Group()
g.deserialize(parent_data)
self.parent_groups.append(g)
def _walk_relationship(self, rel, include_self=False, preserve_ordering=False):
'''
Given `rel` that is an iterable property of Group,
constituting a directed acyclic graph among all groups,
Returns a set of all groups in full tree
A   B  C
|  / |  /
| /  | /
D -> E
|  /    vertical connections
| /     are directed upward
F
Called on F, returns set of (A, B, C, D, E)
'''
seen = set([])
unprocessed = set(getattr(self, rel))
if include_self:
unprocessed.add(self)
if preserve_ordering:
ordered = [self] if include_self else []
ordered.extend(getattr(self, rel))
while unprocessed:
seen.update(unprocessed)
new_unprocessed = set([])
for new_item in chain.from_iterable(getattr(g, rel) for g in unprocessed):
new_unprocessed.add(new_item)
if preserve_ordering:
if new_item not in seen:
ordered.append(new_item)
new_unprocessed.difference_update(seen)
unprocessed = new_unprocessed
if preserve_ordering:
return ordered
return seen
def get_ancestors(self):
return self._walk_relationship('parent_groups')
def get_descendants(self, **kwargs):
return self._walk_relationship('child_groups', **kwargs)
@property
def host_names(self):
if self._hosts is None:
self._hosts = set(self.hosts)
return self._hosts
def get_name(self):
return self.name
def add_child_group(self, group):
if self == group:
raise Exception("can't add group to itself")
# don't add if it's already there
if group not in self.child_groups:
# prepare list of group's new ancestors this edge creates
start_ancestors = group.get_ancestors()
new_ancestors = self.get_ancestors()
if group in new_ancestors:
raise AnsibleError("Adding group '%s' as child to '%s' creates a recursive dependency loop." % (to_native(group.name), to_native(self.name)))
new_ancestors.add(self)
new_ancestors.difference_update(start_ancestors)
self.child_groups.append(group)
# update the depth of the child
group.depth = max([self.depth + 1, group.depth])
# update the depth of the grandchildren
group._check_children_depth()
# now add self to child's parent_groups list, but only if there
# isn't already a group with the same name
if self.name not in [g.name for g in group.parent_groups]:
group.parent_groups.append(self)
for h in group.get_hosts():
h.populate_ancestors(additions=new_ancestors)
self.clear_hosts_cache()
def _check_children_depth(self):
depth = self.depth
start_depth = self.depth # self.depth could change over loop
seen = set([])
unprocessed = set(self.child_groups)
while unprocessed:
seen.update(unprocessed)
depth += 1
to_process = unprocessed.copy()
unprocessed = set([])
for g in to_process:
if g.depth < depth:
g.depth = depth
unprocessed.update(g.child_groups)
if depth - start_depth > len(seen):
raise AnsibleError("The group named '%s' has a recursive dependency loop." % to_native(self.name))
def add_host(self, host):
if host.name not in self.host_names:
self.hosts.append(host)
self._hosts.add(host.name)
host.add_group(self)
self.clear_hosts_cache()
def remove_host(self, host):
if host.name in self.host_names:
self.hosts.remove(host)
self._hosts.remove(host.name)
host.remove_group(self)
self.clear_hosts_cache()
def set_variable(self, key, value):
if key == 'ansible_group_priority':
self.set_priority(int(value))
else:
if key in self.vars and isinstance(self.vars[key], MutableMapping) and isinstance(value, Mapping):
self.vars[key] = combine_vars(self.vars[key], value)
else:
self.vars[key] = value
def clear_hosts_cache(self):
self._hosts_cache = None
for g in self.get_ancestors():
g._hosts_cache = None
def get_hosts(self):
if self._hosts_cache is None:
self._hosts_cache = self._get_hosts()
return self._hosts_cache
def _get_hosts(self):
hosts = []
seen = {}
for kid in self.get_descendants(include_self=True, preserve_ordering=True):
kid_hosts = kid.hosts
for kk in kid_hosts:
if kk not in seen:
seen[kk] = 1
if self.name == 'all' and kk.implicit:
continue
hosts.append(kk)
return hosts
def get_vars(self):
return self.vars.copy()
def set_priority(self, priority):
try:
self.priority = int(priority)
except TypeError:
# FIXME: warn about invalid priority
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,401 |
Ansible crashes when callback_whitelist contains collection callback that does not exist
|
##### SUMMARY
If `callback_whitelist` contains a FQCN that does not exist, Ansible will crash:
```
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'set_options'
the full traceback was:
Traceback (most recent call last):
File "ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "ansible/lib/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "ansible/lib/ansible/executor/playbook_executor.py", line 99, in run
self._tqm.load_callbacks()
File "ansible/lib/ansible/executor/task_queue_manager.py", line 165, in load_callbacks
callback_obj.set_options()
AttributeError: 'NoneType' object has no attribute 'set_options'
```
This happens since #66128 (unsurprisingly). This is easy to fix by changing line 165 to
```
if callback_obj is not None:
    callback_obj.set_options()
```
but this only fixes the symptom (crash), not the cause (no handling when callback not found).
I guess a better behavior would be to:
```
if callback_obj is None:
raise AnsibleError("Cannot find callback: %s" % callback_plugin_name)
```
or at least print a warning. Since I'm not sure what the correct behavior is, I'm creating this issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/executor/task_queue_manager.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/69401
|
https://github.com/ansible/ansible/pull/69440
|
eb3e4b3a7b8dc39f90264ab6b40c72a48cc0fd59
|
0aa76503dc706340d85f4d3f19f472880187eb14
| 2020-05-09T07:36:27Z |
python
| 2020-05-13T16:02:31Z |
lib/ansible/executor/task_queue_manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import tempfile
import time
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.executor.play_iterator import PlayIterator
from ansible.executor.stats import AggregateStats
from ansible.executor.task_result import TaskResult
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text, to_native
from ansible.playbook.block import Block
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import callback_loader, strategy_loader, module_loader
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.helpers import pct_to_int
from ansible.vars.hostvars import HostVars
from ansible.vars.reserved import warn_if_reserved
from ansible.utils.display import Display
from ansible.utils.multiprocessing import context as multiprocessing_context
__all__ = ['TaskQueueManager']
display = Display()
class TaskQueueManager:
'''
This class handles the multiprocessing requirements of Ansible by
creating a pool of worker forks, a result handler fork, and a
manager object with shared datastructures/queues for coordinating
work between all processes.
The queue manager is responsible for loading the play strategy plugin,
which dispatches the Play's tasks to hosts.
'''
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8
RUN_UNKNOWN_ERROR = 255
def __init__(self, inventory, variable_manager, loader, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False, forks=None):
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self._stats = AggregateStats()
self.passwords = passwords
self._stdout_callback = stdout_callback
self._run_additional_callbacks = run_additional_callbacks
self._run_tree = run_tree
self._forks = forks or 5
self._callbacks_loaded = False
self._callback_plugins = []
self._start_at_done = False
# make sure any module paths (if specified) are added to the module_loader
if context.CLIARGS.get('module_path', False):
for path in context.CLIARGS['module_path']:
if path:
module_loader.add_directory(path)
# a special flag to help us exit cleanly
self._terminated = False
# dictionaries to keep track of failed/unreachable hosts
self._failed_hosts = dict()
self._unreachable_hosts = dict()
try:
self._final_q = multiprocessing_context.Queue()
except OSError as e:
raise AnsibleError("Unable to use multiprocessing, this is normally caused by lack of access to /dev/shm: %s" % to_native(e))
# A temporary file (opened pre-fork) used by connection
# plugins for inter-process locking.
self._connection_lockfile = tempfile.TemporaryFile()
def _initialize_processes(self, num):
self._workers = []
for i in range(num):
self._workers.append(None)
def load_callbacks(self):
'''
Loads all available callbacks, with the exception of those which
utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',
only one such callback plugin will be loaded.
'''
if self._callbacks_loaded:
return
stdout_callback_loaded = False
if self._stdout_callback is None:
self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK
if isinstance(self._stdout_callback, CallbackBase):
stdout_callback_loaded = True
elif isinstance(self._stdout_callback, string_types):
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
else:
self._stdout_callback = callback_loader.get(self._stdout_callback)
self._stdout_callback.set_options()
stdout_callback_loaded = True
else:
raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")
for callback_plugin in callback_loader.all(class_only=True):
callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', '')
callback_needs_whitelist = getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)
(callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))
if callback_type == 'stdout':
# we only allow one callback of type 'stdout' to be loaded,
if callback_name != self._stdout_callback or stdout_callback_loaded:
continue
stdout_callback_loaded = True
elif callback_name == 'tree' and self._run_tree:
# special case for ansible cli option
pass
elif not self._run_additional_callbacks or (callback_needs_whitelist and (
C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):
# 2.x plugins shipped with ansible should require whitelisting, older or non shipped should load automatically
continue
callback_obj = callback_plugin()
callback_obj.set_options()
self._callback_plugins.append(callback_obj)
for callback_plugin_name in (c for c in C.DEFAULT_CALLBACK_WHITELIST if AnsibleCollectionRef.is_valid_fqcr(c)):
# TODO: need to extend/duplicate the stdout callback check here (and possibly move this ahead of the old way)
callback_obj = callback_loader.get(callback_plugin_name)
callback_obj.set_options()
self._callback_plugins.append(callback_obj)
self._callbacks_loaded = True
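# Illustrative configuration (sketch): a collection callback is whitelisted by
# FQCN, e.g. in ansible.cfg:
#   [defaults]
#   callback_whitelist = community.general.log_plays
# If the FQCN resolves to no plugin, callback_loader.get() returns None and the
# unguarded set_options() call above is what produces the crash reported in the
# issue.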
def run(self, play):
'''
Iterates over the roles/tasks in a play, using the given (or default)
strategy for queueing tasks. The default is the linear strategy, which
operates like classic Ansible by keeping all hosts in lock-step with
a given task (meaning no hosts move on to the next task until all hosts
are done with the current task).
'''
if not self._callbacks_loaded:
self.load_callbacks()
all_vars = self._variable_manager.get_vars(play=play)
warn_if_reserved(all_vars)
templar = Templar(loader=self._loader, variables=all_vars)
new_play = play.copy()
new_play.post_validate(templar)
new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers
self.hostvars = HostVars(
inventory=self._inventory,
variable_manager=self._variable_manager,
loader=self._loader,
)
play_context = PlayContext(new_play, self.passwords, self._connection_lockfile.fileno())
if (self._stdout_callback and
hasattr(self._stdout_callback, 'set_play_context')):
self._stdout_callback.set_play_context(play_context)
for callback_plugin in self._callback_plugins:
if hasattr(callback_plugin, 'set_play_context'):
callback_plugin.set_play_context(play_context)
self.send_callback('v2_playbook_on_play_start', new_play)
# build the iterator
iterator = PlayIterator(
inventory=self._inventory,
play=new_play,
play_context=play_context,
variable_manager=self._variable_manager,
all_vars=all_vars,
start_at_done=self._start_at_done,
)
# adjust the number of workers to the configured forks or the batch size, whichever is lower
self._initialize_processes(min(self._forks, iterator.batch_size))
# load the specified strategy (or the default linear one)
strategy = strategy_loader.get(new_play.strategy, self)
if strategy is None:
raise AnsibleError("Invalid play strategy specified: %s" % new_play.strategy, obj=play._ds)
# Because the TQM may survive multiple play runs, we start by marking
# any hosts as failed in the iterator here which may have been marked
# as failed in previous runs. Then we clear the internal list of failed
# hosts so we know what failed this round.
for host_name in self._failed_hosts.keys():
host = self._inventory.get_host(host_name)
iterator.mark_host_failed(host)
self.clear_failed_hosts()
# during initialization, the PlayContext will clear the start_at_task
# field to signal that a matching task was found, so check that here
# and remember it so we don't try to skip tasks on future plays
if context.CLIARGS.get('start_at_task') is not None and play_context.start_at_task is None:
self._start_at_done = True
# and run the play using the strategy and cleanup on way out
play_return = strategy.run(iterator, play_context)
# now re-save the hosts that failed from the iterator to our internal list
for host_name in iterator.get_failed_hosts():
self._failed_hosts[host_name] = True
strategy.cleanup()
self._cleanup_processes()
return play_return
def cleanup(self):
display.debug("RUNNING CLEANUP")
self.terminate()
self._final_q.close()
self._cleanup_processes()
def _cleanup_processes(self):
if hasattr(self, '_workers'):
for attempts_remaining in range(C.WORKER_SHUTDOWN_POLL_COUNT - 1, -1, -1):
if not any(worker_prc and worker_prc.is_alive() for worker_prc in self._workers):
break
if attempts_remaining:
time.sleep(C.WORKER_SHUTDOWN_POLL_DELAY)
else:
display.warning('One or more worker processes are still running and will be terminated.')
for worker_prc in self._workers:
if worker_prc and worker_prc.is_alive():
try:
worker_prc.terminate()
except AttributeError:
pass
def clear_failed_hosts(self):
self._failed_hosts = dict()
def get_inventory(self):
return self._inventory
def get_variable_manager(self):
return self._variable_manager
def get_loader(self):
return self._loader
def get_workers(self):
return self._workers[:]
def terminate(self):
self._terminated = True
def has_dead_workers(self):
# returns True if any worker process died abnormally, e.g. when
# self._workers contains entries such as:
# [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,
#  <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>]
defunct = False
for x in self._workers:
if getattr(x, 'exitcode', None):
defunct = True
return defunct
def send_callback(self, method_name, *args, **kwargs):
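# dispatch the event to the stdout callback and all other loaded callback
# plugins, preferring the v2_* method and falling back to the legacy name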
for callback_plugin in [self._stdout_callback] + self._callback_plugins:
# a plugin that sets self.disabled to True will not be called
# see osx_say.py example for such a plugin
if getattr(callback_plugin, 'disabled', False):
continue
# try to find v2 method, fallback to v1 method, ignore callback if no method found
methods = []
for possible in [method_name, 'v2_on_any']:
gotit = getattr(callback_plugin, possible, None)
if gotit is None:
gotit = getattr(callback_plugin, possible.replace('v2_', ''), None)
if gotit is not None:
methods.append(gotit)
# send clean copies
new_args = []
for arg in args:
# FIXME: add play/task cleaners
if isinstance(arg, TaskResult):
new_args.append(arg.clean_copy())
# elif isinstance(arg, Play):
# elif isinstance(arg, Task):
else:
new_args.append(arg)
for method in methods:
try:
method(*new_args, **kwargs)
except Exception as e:
# TODO: add config toggle to make this fatal or not?
display.warning(u"Failure using method (%s) in callback plugin (%s): %s" % (to_text(method_name), to_text(callback_plugin), to_text(e)))
from traceback import format_tb
from sys import exc_info
display.vvv('Callback Exception: \n' + ' '.join(format_tb(exc_info()[2])))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,401 |
Ansible crashes when callback_whitelist contains collection callback that does not exist
|
##### SUMMARY
If `callback_whitelist` contains an FQCN that does not exist, Ansible will crash:
```
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'set_options'
the full traceback was:
Traceback (most recent call last):
File "ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "ansible/lib/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "ansible/lib/ansible/executor/playbook_executor.py", line 99, in run
self._tqm.load_callbacks()
File "ansible/lib/ansible/executor/task_queue_manager.py", line 165, in load_callbacks
callback_obj.set_options()
AttributeError: 'NoneType' object has no attribute 'set_options'
```
This has been happening since #66128 (unsurprisingly). It is easy to fix by changing line 165 to
```
if callback_obj is not None:
callback_obj.set_options()
```
but this only fixes the symptom (the crash), not the cause (no handling when the callback is not found).
I guess a better behavior would be to:
```
if callback_obj is None:
raise AnsibleError("Cannot find callback: %s" % callback_plugin_name)
```
or at least print a warning. Since I'm not sure what the correct behavior is, I'm creating this issue.
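For reference, a minimal way to trigger this (`nonexistent.collection.callback` and `playbook.yml` are placeholders; the FQCN just needs to not resolve to any installed callback):
```
ANSIBLE_CALLBACK_WHITELIST=nonexistent.collection.callback ansible-playbook playbook.yml
```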
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/executor/task_queue_manager.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/69401
|
https://github.com/ansible/ansible/pull/69440
|
eb3e4b3a7b8dc39f90264ab6b40c72a48cc0fd59
|
0aa76503dc706340d85f4d3f19f472880187eb14
| 2020-05-09T07:36:27Z |
python
| 2020-05-13T16:02:31Z |
test/integration/targets/collections/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_COLLECTIONS_PATHS=$PWD/collection_root_user:$PWD/collection_root_sys
export ANSIBLE_GATHERING=explicit
export ANSIBLE_GATHER_SUBSET=minimal
export ANSIBLE_HOST_PATTERN_MISMATCH=error
# FUTURE: just use INVENTORY_PATH as-is once ansible-test sets the right dir
ipath=../../$(basename "${INVENTORY_PATH}")
export INVENTORY_PATH="$ipath"
# test callback
ANSIBLE_CALLBACK_WHITELIST=testns.testcoll.usercallback ansible localhost -m ping | grep "usercallback says ok"
# test documentation
ansible-doc testns.testcoll.testmodule -vvv | grep -- "- normal_doc_frag"
# test adhoc default collection resolution (use unqualified collection module with playbook dir under its collection)
echo "testing adhoc default collection support with explicit playbook dir"
ANSIBLE_PLAYBOOK_DIR=./collection_root_user/ansible_collections/testns/testcoll ansible localhost -m testmodule
echo "testing bad doc_fragments (expected ERROR message follows)"
# test documentation failure
ansible-doc testns.testcoll.testmodule_bad_docfrags -vvv 2>&1 | grep -- "unknown doc_fragment"
# we need multiple plays, and conditional import_playbook is noisy and causes problems, so choose here which one to use...
if [[ ${INVENTORY_PATH} == *.winrm ]]; then
export TEST_PLAYBOOK=windows.yml
else
export TEST_PLAYBOOK=posix.yml
echo "testing default collection support"
ansible-playbook -i "${INVENTORY_PATH}" collection_root_user/ansible_collections/testns/testcoll/playbooks/default_collection_playbook.yml
fi
# run test playbooks
ansible-playbook -i "${INVENTORY_PATH}" -i ./a.statichost.yml -v "${TEST_PLAYBOOK}" "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
ansible-playbook -i "${INVENTORY_PATH}" -i ./a.statichost.yml -v invocation_tests.yml "$@"
fi
# test adjacent with --playbook-dir
export ANSIBLE_COLLECTIONS_PATHS=''
ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=1 ansible-inventory -i a.statichost.yml --list --export --playbook-dir=. -v "$@"
# use an inventory source with caching enabled
ansible-playbook -i a.statichost.yml -i ./cache.statichost.yml -v check_populated_inventory.yml
# Check that the inventory source with caching enabled was stored
if [[ "$(find ./inventory_cache -type f ! -path "./inventory_cache/.keep" | wc -l)" -ne "1" ]]; then
echo "Failed to find the expected single cache"
exit 1
fi
CACHEFILE="$(find ./inventory_cache -type f ! -path './inventory_cache/.keep')"
# Check the cache for the expected hosts
if [[ "$(grep -wc "cache_host_a" "$CACHEFILE")" -ne "1" ]]; then
echo "Failed to cache host as expected"
exit 1
fi
if [[ "$(grep -wc "dynamic_host_a" "$CACHEFILE")" -ne "0" ]]; then
echo "Cached an incorrect source"
exit 1
fi
./vars_plugin_tests.sh
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,490 |
ansible-galaxy collection list should examine galaxy.yml for information
|
##### SUMMARY
The list subcommand throws a lot of warnings about a missing version if the collection was not built and installed via an artifact. This information can often be obtained from the galaxy.yml though, so it would make sense to fall back to that ...
```
(ansible_base) [vagrant@centos8 ~]$ ansible-galaxy collection list
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/azure/azcollection' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cyberark/bizdev' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/google/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netbox_community/ansible_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/splunk/enterprise_security' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ansible/netcommon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/check_point/mgmt' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/f5networks/f5_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ibm/qradar' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/openstack/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/vyos/vyos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/arista/eos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/iosxr' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ucs' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/aci' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/meraki' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/intersight' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/mso' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/nxos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/fortinet/fortios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/junipernetworks/junos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flasharray' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flashblade' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/awx/awx' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/grafana' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/kubernetes' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/amazon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/vmware' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/general' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/gavinfish/azuretest' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/aws' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/elementsw' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/ontap' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/servicenow/servicenow' does not have a MANIFEST.json file, cannot detect version.
# /home/vagrant/.ansible/collections/ansible_collections
Collection Version
-------------------------------- -------
ansible.netcommon *
arista.eos *
awx.awx *
azure.azcollection *
check_point.mgmt *
cisco.aci *
cisco.intersight *
cisco.ios *
cisco.iosxr *
cisco.meraki *
cisco.mso *
cisco.nxos *
cisco.ucs *
community.amazon *
community.general *
community.grafana *
community.kubernetes *
community.vmware *
cyberark.bizdev *
f5networks.f5_modules *
fortinet.fortios *
gavinfish.azuretest *
google.cloud *
ibm.qradar *
junipernetworks.junos *
netapp.aws *
netapp.elementsw *
netapp.ontap *
netbox_community.ansible_modules *
openstack.cloud *
purestorage.flasharray *
purestorage.flashblade *
servicenow.servicenow *
splunk.enterprise_security *
vyos.vyos *
```
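For reference, the version information is usually already present in the collection's galaxy.yml (illustrative values):
```
namespace: my_namespace
name: my_collection
version: 1.0.0
```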
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy collection list
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/67490
|
https://github.com/ansible/ansible/pull/68925
|
343ffaa18b63c92e182b16c3ad84b8d81ca4df69
|
55e29a1464fef700671096dd99bcae89e574ff2f
| 2020-02-17T19:27:40Z |
python
| 2020-05-14T16:28:08Z |
docs/docsite/rst/user_guide/collections_using.rst
|
.. _collections:
*****************
Using collections
*****************
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
You can install and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
* For details on how to *develop* collections see :ref:`developing_collections`.
* For the current development status of Collections and FAQ see `Ansible Collections Community Guide <https://github.com/ansible-collections/general/blob/master/README.rst>`_.
.. contents::
:local:
:depth: 2
.. _collections_installing:
Installing collections
======================
Installing collections with ``ansible-galaxy``
----------------------------------------------
.. include:: ../shared_snippets/installing_collections.txt
.. _collections_older_version:
Installing an older version of a collection
-------------------------------------------
.. include:: ../shared_snippets/installing_older_collection.txt
.. _collection_requirements_file:
Install multiple collections with a requirements file
-----------------------------------------------------
.. include:: ../shared_snippets/installing_multiple_collections.txt
.. _collection_offline_download:
Downloading a collection for offline use
-----------------------------------------
.. include:: ../shared_snippets/download_tarball_collections.txt
.. _galaxy_server_config:
Configuring the ``ansible-galaxy`` client
------------------------------------------
.. include:: ../shared_snippets/galaxy_server_list.txt
.. _collections_downloading:
Downloading collections
=======================
To download a collection and its dependencies for an offline install, run ``ansible-galaxy collection download``. This
downloads the collections specified and their dependencies to the specified folder and creates a ``requirements.yml``
file which can be used to install those collections on a host without access to a Galaxy server. All the collections
are downloaded by default to the ``./collections`` folder.
Just like the ``install`` command, the collections are sourced based on the
:ref:`configured galaxy server config <galaxy_server_config>`. Even if a collection to download was specified by a URL
or path to a tarball, the collection will be redownloaded from the configured Galaxy server.
Collections can be specified as one or multiple collections or with a ``requirements.yml`` file just like
``ansible-galaxy collection install``.
To download a single collection and its dependencies:
.. code-block:: bash
ansible-galaxy collection download my_namespace.my_collection
To download a single collection at a specific version:
.. code-block:: bash
ansible-galaxy collection download my_namespace.my_collection:1.0.0
To download multiple collections either specify multiple collections as command line arguments as shown above or use a
requirements file in the format documented with :ref:`collection_requirements_file`.
.. code-block:: bash
ansible-galaxy collection download -r requirements.yml
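For example, a minimal ``requirements.yml`` might look like this (collection names and versions are placeholders):

.. code-block:: yaml

   collections:
     - name: my_namespace.my_collection
       version: 1.0.0
     - name: other_namespace.other_collection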
All the collections are downloaded by default to the ``./collections`` folder but you can use ``-p`` or
``--download-path`` to specify another path:
.. code-block:: bash
ansible-galaxy collection download my_namespace.my_collection -p ~/offline-collections
Once you have downloaded the collections, the folder contains the collections specified, their dependencies, and a
``requirements.yml`` file. You can use this folder as is with ``ansible-galaxy collection install`` to install the
collections on a host without access to a Galaxy or Automation Hub server.
.. code-block:: bash
# This must be run from the folder that contains the offline collections and requirements.yml file downloaded
# by the internet-connected host
cd ~/offline-collections
ansible-galaxy collection install -r requirements.yml
.. _collections_listing:
Listing collections
===================
To list installed collections, run ``ansible-galaxy collection list``. This shows all of the installed collections found in the configured collections search paths. The path where the collections are located is displayed, as well as version information. If no version information is available, a ``*`` is displayed for the version number.
.. code-block:: shell
# /home/astark/.ansible/collections/ansible_collections
Collection Version
-------------------------- -------
cisco.aci 0.0.5
cisco.mso 0.0.4
sandwiches.ham *
splunk.enterprise_security 0.0.5
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------- -------
fortinet.fortios 1.0.6
pureport.pureport 0.0.8
sensu.sensu_go 1.3.0
Run with ``-vvv`` to display more detailed information.
To list a specific collection, pass a valid fully qualified collection name (FQCN) to the command ``ansible-galaxy collection list``. All instances of the collection will be listed.
.. code-block:: shell
> ansible-galaxy collection list fortinet.fortios
# /home/astark/.ansible/collections/ansible_collections
Collection Version
---------------- -------
fortinet.fortios 1.0.1
# /usr/share/ansible/collections/ansible_collections
Collection Version
---------------- -------
fortinet.fortios 1.0.6
To search other paths for collections, use the ``-p`` option. Specify multiple search paths by separating them with a ``:``. The list of paths specified on the command line will be added to the beginning of the configured collections search paths.
.. code-block:: shell
> ansible-galaxy collection list -p '/opt/ansible/collections:/etc/ansible/collections'
# /opt/ansible/collections/ansible_collections
Collection Version
--------------- -------
sandwiches.club 1.7.2
# /etc/ansible/collections/ansible_collections
Collection Version
-------------- -------
sandwiches.pbj 1.2.0
# /home/astark/.ansible/collections/ansible_collections
Collection Version
-------------------------- -------
cisco.aci 0.0.5
cisco.mso 0.0.4
fortinet.fortios 1.0.1
sandwiches.ham *
splunk.enterprise_security 0.0.5
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------- -------
fortinet.fortios 1.0.6
pureport.pureport 0.0.8
sensu.sensu_go 1.3.0
.. _using_collections:
Verifying collections
=====================
Verifying collections with ``ansible-galaxy``
---------------------------------------------
Once installed, you can verify that the content of the installed collection matches the content of the collection on the server. This feature expects that the collection is installed in one of the configured collection paths and that the collection exists on one of the configured galaxy servers.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection
The output of the ``ansible-galaxy collection verify`` command is quiet if it is successful. If a collection has been modified, the altered files are listed under the collection name.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection
Collection my_namespace.my_collection contains modified content in the following files:
my_namespace.my_collection
plugins/inventory/my_inventory.py
plugins/modules/my_module.py
You can use the ``-vvv`` flag to display additional information, such as the version and path of the installed collection, the URL of the remote collection used for validation, and successful verification output.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection -vvv
...
Verifying 'my_namespace.my_collection:1.0.0'.
Installed collection found at '/path/to/ansible_collections/my_namespace/my_collection/'
Remote collection found at 'https://galaxy.ansible.com/download/my_namespace-my_collection-1.0.0.tar.gz'
Successfully verified that checksums for 'my_namespace.my_collection:1.0.0' match the remote collection
If you have a pre-release or non-latest version of a collection installed, you should include the specific version to verify. If the version is omitted, the installed collection is verified against the latest version available on the server.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection:1.0.0
In addition to the ``namespace.collection_name:version`` format, you can provide the collections to verify in a ``requirements.yml`` file. Dependencies listed in ``requirements.yml`` are not included in the verify process and should be verified separately.
.. code-block:: bash
ansible-galaxy collection verify -r requirements.yml
Verifying against ``tar.gz`` files is not supported. If your ``requirements.yml`` contains paths to tar files or URLs for installation, you can use the ``--ignore-errors`` flag to ensure that all collections using the ``namespace.name`` format in the file are processed.
.. _collections_using_playbook:
Using collections in a Playbook
===============================
Once installed, you can reference a collection's content by its fully qualified collection name (FQCN):
.. code-block:: yaml
- hosts: all
tasks:
- my_namespace.my_collection.mymodule:
option1: value
This works for roles or any type of plugin distributed within the collection:
.. code-block:: yaml
- hosts: all
tasks:
- import_role:
name: my_namespace.my_collection.role1
- my_namespace.my_collection.mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
Simplifying module names with the ``collections`` keyword
=========================================================
The ``collections`` keyword lets you define a list of collections that your role or playbook should search for unqualified module and action names. So you can use the ``collections`` keyword, then simply refer to modules and action plugins by their short-form names throughout that role or playbook.
.. warning::
If your playbook uses both the ``collections`` keyword and one or more roles, the roles do not inherit the collections set by the playbook. See below for details.
Using ``collections`` in roles
------------------------------
Within a role, you can control which collections Ansible searches for the tasks inside the role using the ``collections`` keyword in the role's ``meta/main.yml``. Ansible will use the collections list defined inside the role even if the playbook that calls the role defines different collections in a separate ``collections`` keyword entry. Roles defined inside a collection always implicitly search their own collection first, so you don't need to use the ``collections`` keyword to access modules, actions, or other roles contained in the same collection.
.. code-block:: yaml
# myrole/meta/main.yml
collections:
- my_namespace.first_collection
- my_namespace.second_collection
- other_namespace.other_collection
Using ``collections`` in playbooks
----------------------------------
In a playbook, you can control the collections Ansible searches for modules and action plugins to execute. However, any roles you call in your playbook define their own collections search order; they do not inherit the calling playbook's settings. This is true even if the role does not define its own ``collections`` keyword.
.. code-block:: yaml
- hosts: all
collections:
- my_namespace.my_collection
tasks:
- import_role:
name: role1
- mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
The ``collections`` keyword merely creates an ordered 'search path' for non-namespaced plugin and role references. It does not install content or otherwise change Ansible's behavior around the loading of plugins or roles. Note that an FQCN is still required for plugins other than modules and action plugins (for example, lookups, filters, and tests).
.. seealso::
:ref:`developing_collections`
Develop or modify a collection.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,490 |
ansible-galaxy collection list should examine galaxy.yml for information
|
|
https://github.com/ansible/ansible/issues/67490
|
https://github.com/ansible/ansible/pull/68925
|
343ffaa18b63c92e182b16c3ad84b8d81ca4df69
|
55e29a1464fef700671096dd99bcae89e574ff2f
| 2020-02-17T19:27:40Z |
python
| 2020-05-14T16:28:08Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
CollectionRequirement,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections
)
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
def _display_header(path, h1, h2, w1=10, w2=7):
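# prints a table header, e.g. (illustrative):
#
# /path/to/ansible_collections
# Collection Version
# ---------- -------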
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection),
version=collection.latest_version,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
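# work out the column widths needed to line up the FQCN and version
# columns, based on the longest value present in each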
if is_iterable(collections):
fqcn_set = set(to_text(c) for c in collections)
version_set = set(to_text(c.latest_version) for c in collections)
else:
fqcn_set = set([to_text(collections)])
version_set = set([collections.latest_version])
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collection-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=C.COLLECTIONS_PATHS, action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize a new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
"command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare the checksums of the installed collection(s) '
'with those of the collection(s) found on the server. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The collection(s) name or '
'path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=C.COLLECTIONS_PATHS[0],
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be publish to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
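# Illustrative ansible.cfg entries that the definitions above map to:
#
#   [galaxy]
#   server_list = release_galaxy
#
#   [galaxy_server.release_galaxy]
#   url = https://galaxy.ansible.com/
#   token = <token>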
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False)]
validate_certs = not context.CLIARGS['ignore_certs']
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_key in server_list:
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as a kwarg to GalaxyAPI
auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options['validate_certs'] = validate_certs
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
validate_certs=validate_certs))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
validate_certs=validate_certs))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections') or []:
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy,
"explicit_requirement_%s" % req_name,
req_source,
validate_certs=not context.CLIARGS['ignore_certs']))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
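# For illustration only (hypothetical entries): a v2 requirements file containing
#   collections:
#     - name: my_namespace.my_collection
#       version: '>=1.0.0,<2.0.0'
# parses into requirements['collections'] == [('my_namespace.my_collection', '>=1.0.0,<2.0.0', None)].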
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to the 'description' under 'galaxy_info'.
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
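# For illustration: _resolve_path('~/collections') expands env vars and the user home
# before returning an absolute path such as '/home/user/collections' (example value only).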
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
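# For illustration (hypothetical input): comment_ify("See L(the docs, https://docs.ansible.com) and C(name)")
# renders roughly as "# See the docs <https://docs.ansible.com> and 'name'", wrapped to 117 columns.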
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(self, collections, requirements_file):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
requirement = None
if os.path.isfile(to_bytes(collection_input, errors='surrogate_or_strict')) or \
urlparse(collection_input).scheme.lower() in ['http', 'https']:
# Arg is a file path or URL to a collection
name = collection_input
else:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
return requirements
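# For illustration: a positional arg of 'my_ns.my_coll:1.2.3' (hypothetical) yields the
# requirement tuple ('my_ns.my_coll', '1.2.3', None); a bare name defaults the version to '*'.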
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_download(self):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
ignore_certs = context.CLIARGS['ignore_certs']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(requirements, download_path, self.api_servers, (not ignore_certs), no_deps,
context.CLIARGS['allow_pre_release'])
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
if not gr._exists:
data = u"- the role %s was not found" % role
break
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
self.pager(data)
def execute_verify(self):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
verify_collections(requirements, resolved_paths, self.api_servers, (not ignore_certs), ignore_errors,
allow_pre_release=True)
return 0
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
output_path = validate_collection_path(output_path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps, context.CLIARGS['allow_pre_release'])
return 0
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_list_collection(self):
"""
List all collections installed on the local system
"""
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = C.config.get_configuration_definition('COLLECTIONS_PATHS').get('default')
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
collection = CollectionRequirement.from_path(b_collection_path, False)
fqcn_width, version_width = _get_collection_widths(collection)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = find_existing_collections(collection_path)
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
collections.sort(key=to_text)
for collection in collections:
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_login(self):
"""
Verify the user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,490 |
ansible-galaxy collection list should examine galaxy.yml for information
|
##### SUMMARY
The list subcommand throws a lot of warnings about a missing version if the collection was not built and installed via an artifact. This information can often be obtained in the galaxy.yml though, so it would make sense to fall back to that ...
```
(ansible_base) [vagrant@centos8 ~]$ ansible-galaxy collection list
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/azure/azcollection' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cyberark/bizdev' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/google/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netbox_community/ansible_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/splunk/enterprise_security' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ansible/netcommon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/check_point/mgmt' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/f5networks/f5_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ibm/qradar' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/openstack/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/vyos/vyos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/arista/eos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/iosxr' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ucs' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/aci' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/meraki' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/intersight' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/mso' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/nxos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/fortinet/fortios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/junipernetworks/junos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flasharray' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flashblade' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/awx/awx' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/grafana' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/kubernetes' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/amazon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/vmware' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/general' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/gavinfish/azuretest' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/aws' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/elementsw' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/ontap' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/servicenow/servicenow' does not have a MANIFEST.json file, cannot detect version.
# /home/vagrant/.ansible/collections/ansible_collections
Collection Version
-------------------------------- -------
ansible.netcommon *
arista.eos *
awx.awx *
azure.azcollection *
check_point.mgmt *
cisco.aci *
cisco.intersight *
cisco.ios *
cisco.iosxr *
cisco.meraki *
cisco.mso *
cisco.nxos *
cisco.ucs *
community.amazon *
community.general *
community.grafana *
community.kubernetes *
community.vmware *
cyberark.bizdev *
f5networks.f5_modules *
fortinet.fortios *
gavinfish.azuretest *
google.cloud *
ibm.qradar *
junipernetworks.junos *
netapp.aws *
netapp.elementsw *
netapp.ontap *
netbox_community.ansible_modules *
openstack.cloud *
purestorage.flasharray *
purestorage.flashblade *
servicenow.servicenow *
splunk.enterprise_security *
vyos.vyos *
```
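A minimal sketch of the kind of fallback being requested (the helper name is hypothetical and PyYAML is assumed; this is not the actual fix):
```
import os
import yaml

def galaxy_yml_version(b_collection_path):
    # Best-effort fallback: read the version from galaxy.yml when MANIFEST.json is absent.
    b_galaxy_yml = os.path.join(b_collection_path, b'galaxy.yml')
    if not os.path.exists(b_galaxy_yml):
        return '*'
    with open(b_galaxy_yml, 'rb') as f:
        meta = yaml.safe_load(f) or {}
    return meta.get('version') or '*'
```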
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy collection list
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/67490
|
https://github.com/ansible/ansible/pull/68925
|
343ffaa18b63c92e182b16c3ad84b8d81ca4df69
|
55e29a1464fef700671096dd99bcae89e574ff2f
| 2020-02-17T19:27:40Z |
python
| 2020-05-14T16:28:08Z |
lib/ansible/galaxy/collection.py
|
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import json
import operator
import os
import shutil
import stat
import sys
import tarfile
import tempfile
import threading
import time
import yaml
from collections import namedtuple
from contextlib import contextmanager
from distutils.version import LooseVersion
from hashlib import sha256
from io import BytesIO
from yaml.error import YAMLError
try:
import queue
except ImportError:
import Queue as queue # Python 2
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.api import CollectionVersionMetadata, GalaxyError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils import six
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.version import SemanticVersion
from ansible.module_utils.urls import open_url
urlparse = six.moves.urllib.parse.urlparse
urllib_error = six.moves.urllib.error
display = Display()
MANIFEST_FORMAT = 1
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
class CollectionRequirement:
_FILE_MAPPING = [(b'MANIFEST.json', 'manifest_file'), (b'FILES.json', 'files_file')]
def __init__(self, namespace, name, b_path, api, versions, requirement, force, parent=None, metadata=None,
files=None, skip=False, allow_pre_releases=False):
"""
Represents a collection requirement, the versions that are available to be installed as well as any
dependencies the collection has.
:param namespace: The collection namespace.
:param name: The collection name.
:param b_path: Byte str of the path to the collection tarball if it has already been downloaded.
:param api: The GalaxyAPI to use if the collection is from Galaxy.
:param versions: A list of versions of the collection that are available.
:param requirement: The version requirement string used to verify the list of versions fit the requirements.
:param force: Whether the force flag applied to the collection.
:param parent: The name of the parent the collection is a dependency of.
:param metadata: The galaxy.api.CollectionVersionMetadata that has already been retrieved from the Galaxy
server.
:param files: The files that exist inside the collection. This is based on the FILES.json file inside the
collection artifact.
:param skip: Whether to skip installing the collection. Should be set if the collection is already installed
and force is not set.
:param allow_pre_releases: Whether to allow pre-release versions of collections.
"""
self.namespace = namespace
self.name = name
self.b_path = b_path
self.api = api
self._versions = set(versions)
self.force = force
self.skip = skip
self.required_by = []
self.allow_pre_releases = allow_pre_releases
self._metadata = metadata
self._files = files
self.add_requirement(parent, requirement)
def __str__(self):
return to_native("%s.%s" % (self.namespace, self.name))
def __unicode__(self):
return u"%s.%s" % (self.namespace, self.name)
@property
def metadata(self):
self._get_metadata()
return self._metadata
@property
def versions(self):
if self.allow_pre_releases:
return self._versions
return set(v for v in self._versions if v == '*' or not SemanticVersion(v).is_prerelease)
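# For illustration: with allow_pre_releases=False, {'1.0.0', '1.1.0-beta.1'} (hypothetical
# values) filters down to {'1.0.0'}; the '*' placeholder version is always kept.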
@versions.setter
def versions(self, value):
self._versions = set(value)
@property
def pre_releases(self):
return set(v for v in self._versions if SemanticVersion(v).is_prerelease)
@property
def latest_version(self):
try:
return max([v for v in self.versions if v != '*'], key=SemanticVersion)
except ValueError: # ValueError: max() arg is an empty sequence
return '*'
@property
def dependencies(self):
if not self._metadata:
if len(self.versions) > 1:
return {}
self._get_metadata()
dependencies = self._metadata.dependencies
if dependencies is None:
return {}
return dependencies
def add_requirement(self, parent, requirement):
self.required_by.append((parent, requirement))
new_versions = set(v for v in self.versions if self._meets_requirements(v, requirement, parent))
if len(new_versions) == 0:
if self.skip:
force_flag = '--force-with-deps' if parent else '--force'
version = self.latest_version if self.latest_version != '*' else 'unknown'
msg = "Cannot meet requirement %s:%s as it is already installed at version '%s'. Use %s to overwrite" \
% (to_text(self), requirement, version, force_flag)
raise AnsibleError(msg)
elif parent is None:
msg = "Cannot meet requirement %s for dependency %s" % (requirement, to_text(self))
else:
msg = "Cannot meet dependency requirement '%s:%s' for collection %s" \
% (to_text(self), requirement, parent)
collection_source = to_text(self.b_path, nonstring='passthru') or self.api.api_server
req_by = "\n".join(
"\t%s - '%s:%s'" % (to_text(p) if p else 'base', to_text(self), r)
for p, r in self.required_by
)
versions = ", ".join(sorted(self.versions, key=SemanticVersion))
if not self.versions and self.pre_releases:
pre_release_msg = (
'\nThis collection only contains pre-releases. Utilize `--pre` to install pre-releases, or '
'explicitly provide the pre-release version.'
)
else:
pre_release_msg = ''
raise AnsibleError(
"%s from source '%s'. Available versions before last requirement added: %s\nRequirements from:\n%s%s"
% (msg, collection_source, versions, req_by, pre_release_msg)
)
self.versions = new_versions
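# For illustration: starting from versions {'1.0.0', '1.5.0', '2.0.0'} (hypothetical), adding
# the requirement '>=1.0.0,<2.0.0' narrows self.versions to {'1.0.0', '1.5.0'}.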
def download(self, b_path):
download_url = self._metadata.download_url
artifact_hash = self._metadata.artifact_sha256
headers = {}
self.api._add_auth_token(headers, download_url, required=False)
b_collection_path = _download_file(download_url, b_path, artifact_hash, self.api.validate_certs,
headers=headers)
return to_text(b_collection_path, errors='surrogate_or_strict')
def install(self, path, b_temp_path):
if self.skip:
display.display("Skipping '%s' as it is already installed" % to_text(self))
return
# Install if it is not
collection_path = os.path.join(path, self.namespace, self.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display("Installing '%s:%s' to '%s'" % (to_text(self), self.latest_version, collection_path))
if self.b_path is None:
self.b_path = self.download(b_temp_path)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
os.makedirs(b_collection_path)
try:
with tarfile.open(self.b_path, mode='r') as collection_tar:
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as files_obj:
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'MANIFEST.json', b_collection_path, b_temp_path)
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
os.makedirs(os.path.join(b_collection_path, to_bytes(file_name, errors='surrogate_or_strict')))
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def set_latest_version(self):
self.versions = set([self.latest_version])
self._get_metadata()
def verify(self, remote_collection, path, b_temp_tar_path):
if not self.skip:
display.display("'%s' has not been installed, nothing to verify" % (to_text(self)))
return
collection_path = os.path.join(path, self.namespace, self.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.vvv("Verifying '%s:%s'." % (to_text(self), self.latest_version))
display.vvv("Installed collection found at '%s'" % collection_path)
display.vvv("Remote collection found at '%s'" % remote_collection.metadata.download_url)
# Compare installed version versus requirement version
if self.latest_version != remote_collection.latest_version:
err = "%s has the version '%s' but is being compared to '%s'" % (to_text(self), self.latest_version, remote_collection.latest_version)
display.display(err)
return
modified_content = []
# Verify the manifest hash matches before verifying the file manifest
expected_hash = _get_tar_file_hash(b_temp_tar_path, 'MANIFEST.json')
self._verify_file_hash(b_collection_path, 'MANIFEST.json', expected_hash, modified_content)
manifest = _get_json_from_tar_file(b_temp_tar_path, 'MANIFEST.json')
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
self._verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = _get_json_from_tar_file(b_temp_tar_path, file_manifest_filename)
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
if manifest_data['ftype'] == 'file':
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
self._verify_file_hash(b_collection_path, manifest_data['name'], expected_hash, modified_content)
if modified_content:
display.display("Collection %s contains modified content in the following files:" % to_text(self))
display.display(to_text(self))
display.vvv(to_text(self.b_path))
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.vvv(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
display.vvv("Successfully verified that checksums for '%s:%s' match the remote collection" % (to_text(self), self.latest_version))
def _verify_file_hash(self, b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _get_metadata(self):
if self._metadata:
return
self._metadata = self.api.get_collection_version_metadata(self.namespace, self.name, self.latest_version)
def _meets_requirements(self, version, requirements, parent):
"""
Supported version identifiers are '==', '!=', '>', '>=', '<', '<=', '*'. Each requirement is delimited by ','
"""
op_map = {
'!=': operator.ne,
'==': operator.eq,
'=': operator.eq,
'>=': operator.ge,
'>': operator.gt,
'<=': operator.le,
'<': operator.lt,
}
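# For illustration: a requirement string of '>=1.0.0,<2.0.0' (hypothetical) is split on ','
# into '>=1.0.0' and '<2.0.0'; a bare '1.0.0' has no operator prefix and implies '=='.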
for req in list(requirements.split(',')):
op_pos = 2 if len(req) > 1 and req[1] == '=' else 1
op = op_map.get(req[:op_pos])
requirement = req[op_pos:]
if not op:
requirement = req
op = operator.eq
# In the case we are checking a new requirement on a base requirement (parent != None) we can't accept
# version as '*' (unknown version) unless the requirement is also '*'.
if parent and version == '*' and requirement != '*':
display.warning("Failed to validate the collection requirement '%s:%s' for %s when the existing "
"install does not have a version set, the collection may not work."
% (to_text(self), req, parent))
continue
elif requirement == '*' or version == '*':
continue
if not op(SemanticVersion(version), SemanticVersion.from_loose_version(LooseVersion(requirement))):
break
else:
return True
# The loop was broken early, it does not meet all the requirements
return False
@staticmethod
def from_tar(b_path, force, parent=None):
if not tarfile.is_tarfile(b_path):
raise AnsibleError("Collection artifact at '%s' is not a valid tar file." % to_native(b_path))
info = {}
with tarfile.open(b_path, mode='r') as collection_tar:
for b_member_name, property_name in CollectionRequirement._FILE_MAPPING:
n_member_name = to_native(b_member_name)
try:
member = collection_tar.getmember(n_member_name)
except KeyError:
raise AnsibleError("Collection at '%s' does not contain the required file %s."
% (to_native(b_path), n_member_name))
with _tarfile_extract(collection_tar, member) as member_obj:
try:
info[property_name] = json.loads(to_text(member_obj.read(), errors='surrogate_or_strict'))
except ValueError:
raise AnsibleError("Collection tar file member %s does not contain a valid json string."
% n_member_name)
meta = info['manifest_file']['collection_info']
files = info['files_file']['files']
namespace = meta['namespace']
name = meta['name']
version = meta['version']
meta = CollectionVersionMetadata(namespace, name, version, None, None, meta['dependencies'])
if SemanticVersion(version).is_prerelease:
allow_pre_release = True
else:
allow_pre_release = False
return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
metadata=meta, files=files, allow_pre_releases=allow_pre_release)
@staticmethod
def from_path(b_path, force, parent=None):
info = {}
for b_file_name, property_name in CollectionRequirement._FILE_MAPPING:
b_file_path = os.path.join(b_path, b_file_name)
if not os.path.exists(b_file_path):
continue
with open(b_file_path, 'rb') as file_obj:
try:
info[property_name] = json.loads(to_text(file_obj.read(), errors='surrogate_or_strict'))
except ValueError:
raise AnsibleError("Collection file at '%s' does not contain a valid json string."
% to_native(b_file_path))
allow_pre_release = False
if 'manifest_file' in info:
manifest = info['manifest_file']['collection_info']
namespace = manifest['namespace']
name = manifest['name']
version = to_text(manifest['version'], errors='surrogate_or_strict')
try:
_v = SemanticVersion()
_v.parse(version)
if _v.is_prerelease:
allow_pre_release = True
except ValueError:
display.warning("Collection at '%s' does not have a valid version set, falling back to '*'. Found "
"version: '%s'" % (to_text(b_path), version))
version = '*'
dependencies = manifest['dependencies']
else:
display.warning("Collection at '%s' does not have a MANIFEST.json file, cannot detect version."
% to_text(b_path))
parent_dir, name = os.path.split(to_text(b_path, errors='surrogate_or_strict'))
namespace = os.path.split(parent_dir)[1]
version = '*'
dependencies = {}
meta = CollectionVersionMetadata(namespace, name, version, None, None, dependencies)
files = info.get('files_file', {}).get('files', {})
return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
metadata=meta, files=files, skip=True, allow_pre_releases=allow_pre_release)
@staticmethod
def from_name(collection, apis, requirement, force, parent=None, allow_pre_release=False):
namespace, name = collection.split('.', 1)
galaxy_meta = None
for api in apis:
try:
if not (requirement == '*' or requirement.startswith('<') or requirement.startswith('>') or
requirement.startswith('!=')):
# Exact requirement
allow_pre_release = True
if requirement.startswith('='):
requirement = requirement.lstrip('=')
resp = api.get_collection_version_metadata(namespace, name, requirement)
galaxy_meta = resp
versions = [resp.version]
else:
versions = api.get_collection_versions(namespace, name)
except GalaxyError as err:
if err.http_code == 404:
display.vvv("Collection '%s' is not available from server %s %s"
% (collection, api.name, api.api_server))
continue
raise
display.vvv("Collection '%s' obtained from server %s %s" % (collection, api.name, api.api_server))
break
else:
raise AnsibleError("Failed to find collection %s:%s" % (collection, requirement))
req = CollectionRequirement(namespace, name, None, api, versions, requirement, force, parent=parent,
metadata=galaxy_meta, allow_pre_releases=allow_pre_release)
return req
def build_collection(collection_path, output_path, force):
"""
Creates the Ansible collection artifact in a .tar.gz file.
:param collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
b_galaxy_path = os.path.join(b_collection_path, b'galaxy.yml')
if not os.path.exists(b_galaxy_path):
raise AnsibleError("The collection galaxy.yml path '%s' does not exist." % to_native(b_galaxy_path))
collection_meta = _get_galaxy_yml(b_galaxy_path)
file_manifest = _build_files_manifest(b_collection_path, collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'])
collection_manifest = _build_manifest(**collection_meta)
collection_output = os.path.join(output_path, "%s-%s-%s.tar.gz" % (collection_meta['namespace'],
collection_meta['name'],
collection_meta['version']))
b_collection_output = to_bytes(collection_output, errors='surrogate_or_strict')
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(collection_output))
_build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
def download_collections(collections, output_path, apis, validate_certs, no_deps, allow_pre_release):
"""
Download Ansible collections as tarballs from a Galaxy server to the path specified and create a requirements
file describing the downloaded collections, to be used for a later install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _tempdir() as b_temp_path:
display.display("Process install dependency map")
with _display_progress():
dep_map = _build_dependency_map(collections, [], b_temp_path, apis, validate_certs, True, True, no_deps,
allow_pre_release=allow_pre_release)
requirements = []
display.display("Starting collection download process to '%s'" % output_path)
with _display_progress():
for name, requirement in dep_map.items():
collection_filename = "%s-%s-%s.tar.gz" % (requirement.namespace, requirement.name,
requirement.latest_version)
dest_path = os.path.join(output_path, collection_filename)
requirements.append({'name': collection_filename, 'version': requirement.latest_version})
display.display("Downloading collection '%s' to '%s'" % (name, dest_path))
b_temp_download_path = requirement.download(b_temp_path)
shutil.move(b_temp_download_path, to_bytes(dest_path, errors='surrogate_or_strict'))
requirements_path = os.path.join(output_path, 'requirements.yml')
display.display("Writing requirements.yml file of downloaded collections to '%s'" % requirements_path)
with open(to_bytes(requirements_path, errors='surrogate_or_strict'), mode='wb') as req_fd:
req_fd.write(to_bytes(yaml.safe_dump({'collections': requirements}), errors='surrogate_or_strict'))
def publish_collection(collection_path, api, wait, timeout):
"""
Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
display.display("Collection has been published to the Galaxy server %s %s" % (api.name, api.api_server))
with _display_progress():
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(collections, output_path, apis, validate_certs, ignore_errors, no_deps, force, force_deps,
allow_pre_release=False):
"""
Install Ansible collections to the path specified.
:param collections: The collections to install, should be a list of tuples with (name, requirement, Galaxy server).
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param validate_certs: Whether to validate the certificates if downloading a tarball.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = find_existing_collections(output_path)
with _tempdir() as b_temp_path:
display.display("Process install dependency map")
with _display_progress():
dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis,
validate_certs, force, force_deps, no_deps,
allow_pre_release=allow_pre_release)
display.display("Starting collection install process")
with _display_progress():
for collection in dependency_map.values():
try:
collection.install(output_path, b_temp_path)
except AnsibleError as err:
if ignore_errors:
display.warning("Failed to install collection %s but skipping due to --ignore-errors being set. "
"Error: %s" % (to_text(collection), to_text(err)))
else:
raise
def validate_collection_name(name):
"""
Validates that a collection name, as input from the user or a requirements file, fits the expected format.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
def validate_collection_path(collection_path):
""" Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
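# Illustrative example (assumption, not part of the original file):
#   validate_collection_path('/usr/share/ansible/collections')
#   -> '/usr/share/ansible/collections/ansible_collections'
# A path already ending in 'ansible_collections' is returned unchanged.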
def verify_collections(collections, search_paths, apis, validate_certs, ignore_errors, allow_pre_release=False):
with _display_progress():
with _tempdir() as b_temp_path:
for collection in collections:
try:
local_collection = None
b_collection = to_bytes(collection[0], errors='surrogate_or_strict')
if os.path.isfile(b_collection) or urlparse(collection[0]).scheme.lower() in ['http', 'https'] or len(collection[0].split('.')) != 2:
raise AnsibleError(message="'%s' is not a valid collection name. The format namespace.name is expected." % collection[0])
collection_name = collection[0]
namespace, name = collection_name.split('.')
collection_version = collection[1]
# Verify local collection exists before downloading it from a galaxy server
for search_path in search_paths:
b_search_path = to_bytes(os.path.join(search_path, namespace, name), errors='surrogate_or_strict')
if os.path.isdir(b_search_path):
if not os.path.isfile(os.path.join(to_text(b_search_path, errors='surrogate_or_strict'), 'MANIFEST.json')):
raise AnsibleError(
message="Collection %s does not appear to have a MANIFEST.json. " % collection_name +
"A MANIFEST.json is expected if the collection has been built and installed via ansible-galaxy."
)
local_collection = CollectionRequirement.from_path(b_search_path, False)
break
if local_collection is None:
raise AnsibleError(message='Collection %s is not installed in any of the collection paths.' % collection_name)
# Download collection on a galaxy server for comparison
try:
remote_collection = CollectionRequirement.from_name(collection_name, apis, collection_version, False, parent=None,
allow_pre_release=allow_pre_release)
except AnsibleError as e:
if e.message == 'Failed to find collection %s:%s' % (collection[0], collection[1]):
raise AnsibleError('Failed to find remote collection %s:%s on any of the galaxy servers' % (collection[0], collection[1]))
raise
download_url = remote_collection.metadata.download_url
headers = {}
remote_collection.api._add_auth_token(headers, download_url, required=False)
b_temp_tar_path = _download_file(download_url, b_temp_path, None, validate_certs, headers=headers)
local_collection.verify(remote_collection, search_path, b_temp_tar_path)
except AnsibleError as err:
if ignore_errors:
display.warning("Failed to verify collection %s but skipping due to --ignore-errors being set. "
"Error: %s" % (collection[0], to_text(err)))
else:
raise
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
yield b_temp_path
shutil.rmtree(b_temp_path)
@contextmanager
def _tarfile_extract(tar, member):
tar_obj = tar.extractfile(member)
yield tar_obj
tar_obj.close()
@contextmanager
def _display_progress():
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporarily override the global display object with our own, which adds the calls to a queue for the display thread to process.
global display
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread has finished and is no longer using the display
raise
finally:
display = old_display
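# Illustrative usage (assumption, mirrors the call sites elsewhere in this
# file): inside the context manager, display calls are queued and emitted by
# the spinner thread instead of writing to stdout directly:
#
#     with _display_progress():
#         display.display("long running Galaxy operation ...")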
def _get_galaxy_yml(b_galaxy_yml_path):
meta_info = get_collections_galaxy_meta_info()
mandatory_keys = set()
string_keys = set()
list_keys = set()
dict_keys = set()
for info in meta_info:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys))
try:
with open(b_galaxy_yml_path, 'rb') as g_yaml:
galaxy_yml = yaml.safe_load(g_yaml)
except YAMLError as err:
raise AnsibleError("Failed to parse the galaxy.yml at '%s' with the following error:\n%s"
% (to_native(b_galaxy_yml_path), to_native(err)))
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys))))
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
# license is a builtin var in Python, to avoid confusion we just rename it to license_ids
galaxy_yml['license_ids'] = galaxy_yml['license']
del galaxy_yml['license']
return galaxy_yml
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns):
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'galaxy.yml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
entry_template = {
'name': None,
'ftype': None,
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT
}
manifest = {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not b_link_target.startswith(b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'dir'
manifest['files'].append(manifest_entry)
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'file'
manifest_entry['chksum_type'] = 'sha256'
manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256)
manifest['files'].append(manifest_entry)
_walk(b_collection_path, b_collection_path)
return manifest
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_ids, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': license_ids,
'license_file': license_file if license_file else None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(b_collection_path, b_tar_path, collection_manifest, file_manifest):
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [('MANIFEST.json', collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = time.time()
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']:
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
tar_file.add(os.path.realpath(b_src_path), arcname=filename, recursive=False, filter=reset_stat)
shutil.copy(b_tar_filepath, b_tar_path)
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
display.display('Created collection for %s at %s' % (collection_name, to_text(b_tar_path)))
def find_existing_collections(path):
collections = []
b_path = to_bytes(path, errors='surrogate_or_strict')
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if os.path.isdir(b_collection_path):
req = CollectionRequirement.from_path(b_collection_path, False)
display.vvv("Found installed collection %s:%s at '%s'" % (to_text(req), req.latest_version,
to_text(b_collection_path)))
collections.append(req)
return collections
def _build_dependency_map(collections, existing_collections, b_temp_path, apis, validate_certs, force, force_deps,
no_deps, allow_pre_release=False):
dependency_map = {}
# First build the dependency map on the actual requirements
for name, version, source in collections:
_get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis,
validate_certs, (force or force_deps), allow_pre_release=allow_pre_release)
checked_parents = set([to_text(c) for c in dependency_map.values() if c.skip])
while len(dependency_map) != len(checked_parents):
while not no_deps: # Only parse dependencies if no_deps was not set
parents_to_check = set(dependency_map.keys()).difference(checked_parents)
deps_exhausted = True
for parent in parents_to_check:
parent_info = dependency_map[parent]
if parent_info.dependencies:
deps_exhausted = False
for dep_name, dep_requirement in parent_info.dependencies.items():
_get_collection_info(dependency_map, existing_collections, dep_name, dep_requirement,
parent_info.api, b_temp_path, apis, validate_certs, force_deps,
parent=parent, allow_pre_release=allow_pre_release)
checked_parents.add(parent)
# No extra dependencies were resolved, exit loop
if deps_exhausted:
break
# Now we have resolved the deps to our best extent, now select the latest version for collections with
# multiple versions found and go from there
deps_not_checked = set(dependency_map.keys()).difference(checked_parents)
for collection in deps_not_checked:
dependency_map[collection].set_latest_version()
if no_deps or len(dependency_map[collection].dependencies) == 0:
checked_parents.add(collection)
return dependency_map
def _get_collection_info(dep_map, existing_collections, collection, requirement, source, b_temp_path, apis,
validate_certs, force, parent=None, allow_pre_release=False):
dep_msg = ""
if parent:
dep_msg = " - as dependency of %s" % parent
display.vvv("Processing requirement collection '%s'%s" % (to_text(collection), dep_msg))
b_tar_path = None
if os.path.isfile(to_bytes(collection, errors='surrogate_or_strict')):
display.vvvv("Collection requirement '%s' is a tar artifact" % to_text(collection))
b_tar_path = to_bytes(collection, errors='surrogate_or_strict')
elif urlparse(collection).scheme.lower() in ['http', 'https']:
display.vvvv("Collection requirement '%s' is a URL to a tar artifact" % collection)
try:
b_tar_path = _download_file(collection, b_temp_path, None, validate_certs)
except urllib_error.URLError as err:
raise AnsibleError("Failed to download collection tar from '%s': %s"
% (to_native(collection), to_native(err)))
if b_tar_path:
req = CollectionRequirement.from_tar(b_tar_path, force, parent=parent)
collection_name = to_text(req)
if collection_name in dep_map:
collection_info = dep_map[collection_name]
collection_info.add_requirement(None, req.latest_version)
else:
collection_info = req
else:
validate_collection_name(collection)
display.vvvv("Collection requirement '%s' is the name of a collection" % collection)
if collection in dep_map:
collection_info = dep_map[collection]
collection_info.add_requirement(parent, requirement)
else:
apis = [source] if source else apis
collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent,
allow_pre_release=allow_pre_release)
existing = [c for c in existing_collections if to_text(c) == to_text(collection_info)]
if existing and not collection_info.force:
# Test that the installed collection fits the requirement
existing[0].add_requirement(parent, requirement)
collection_info = existing[0]
dep_map[to_text(collection_info)] = collection_info
def _download_file(url, b_path, expected_hash, validate_certs, headers=None):
urlsplit = os.path.splitext(to_text(url.rsplit('/', 1)[1]))
b_file_name = to_bytes(urlsplit[0], errors='surrogate_or_strict')
b_file_ext = to_bytes(urlsplit[1], errors='surrogate_or_strict')
b_file_path = tempfile.NamedTemporaryFile(dir=b_path, prefix=b_file_name, suffix=b_file_ext, delete=False).name
display.vvv("Downloading %s to %s" % (url, to_text(b_path)))
# Galaxy redirects downloads to S3, which rejects the request if an Authorization header is attached, so don't forward that header on redirect
resp = open_url(to_native(url, errors='surrogate_or_strict'), validate_certs=validate_certs, headers=headers,
unredirected_headers=['Authorization'], http_agent=user_agent())
with open(b_file_path, 'wb') as download_file:
actual_hash = _consume_file(resp, download_file)
if expected_hash:
display.vvvv("Validating downloaded file hash %s with expected hash %s" % (actual_hash, expected_hash))
if expected_hash != actual_hash:
raise AnsibleError("Mismatch artifact hash with downloaded file")
return b_file_path
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
with _get_tar_file_member(tar, filename) as tar_obj:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if b_parent_dir != b_dest and not b_parent_dir.startswith(b_dest + to_bytes(os.path.sep)):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as tar_obj:
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as tar_obj:
return _consume_file(tar_obj)
def _consume_file(read_from, write_to=None):
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
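# Illustrative usage (assumption, not part of the original file): the return
# value is the SHA256 hex digest of everything read, and the stream can be
# tee'd to a file object at the same time:
#
#     with open(b'artifact.tar.gz', 'rb') as src, open(b'copy.tar.gz', 'wb') as dst:
#         digest = _consume_file(src, dst)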
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67490 |
ansible-galaxy collection list should examine galaxy.yml for information
|
##### SUMMARY
The list subcommand throws a lot of warnings about a missing version if the collection was not built and installed via an artifact. This information can often be obtained from the galaxy.yml though, so it would make sense to fall back to that ...
```
(ansible_base) [vagrant@centos8 ~]$ ansible-galaxy collection list
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/azure/azcollection' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cyberark/bizdev' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/google/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netbox_community/ansible_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/splunk/enterprise_security' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ansible/netcommon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/check_point/mgmt' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/f5networks/f5_modules' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/ibm/qradar' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/openstack/cloud' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/vyos/vyos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/arista/eos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/iosxr' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ucs' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/aci' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/meraki' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/intersight' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/mso' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/ios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/cisco/nxos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/fortinet/fortios' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/junipernetworks/junos' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flasharray' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/purestorage/flashblade' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/awx/awx' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/grafana' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/kubernetes' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/amazon' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/vmware' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/community/general' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/gavinfish/azuretest' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/aws' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/elementsw' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/netapp/ontap' does not have a MANIFEST.json file, cannot detect version.
[WARNING]: Collection at '/home/vagrant/.ansible/collections/ansible_collections/servicenow/servicenow' does not have a MANIFEST.json file, cannot detect version.
# /home/vagrant/.ansible/collections/ansible_collections
Collection Version
-------------------------------- -------
ansible.netcommon *
arista.eos *
awx.awx *
azure.azcollection *
check_point.mgmt *
cisco.aci *
cisco.intersight *
cisco.ios *
cisco.iosxr *
cisco.meraki *
cisco.mso *
cisco.nxos *
cisco.ucs *
community.amazon *
community.general *
community.grafana *
community.kubernetes *
community.vmware *
cyberark.bizdev *
f5networks.f5_modules *
fortinet.fortios *
gavinfish.azuretest *
google.cloud *
ibm.qradar *
junipernetworks.junos *
netapp.aws *
netapp.elementsw *
netapp.ontap *
netbox_community.ansible_modules *
openstack.cloud *
purestorage.flasharray *
purestorage.flashblade *
servicenow.servicenow *
splunk.enterprise_security *
vyos.vyos *
```
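A minimal sketch of the suggested fallback (hypothetical helper, not the actual patch): when `MANIFEST.json` is missing, try to read the version from `galaxy.yml` before giving up and reporting `*`:

```python
import os
import yaml

def version_from_galaxy_yml(b_collection_path):
    # Hypothetical fallback, only consulted when MANIFEST.json is absent.
    b_galaxy_yml = os.path.join(b_collection_path, b'galaxy.yml')
    if not os.path.exists(b_galaxy_yml):
        return '*'  # keep the current behaviour as the last resort
    with open(b_galaxy_yml, 'rb') as f:
        meta = yaml.safe_load(f) or {}
    return str(meta.get('version') or '*')
```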
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy collection list
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/67490
|
https://github.com/ansible/ansible/pull/68925
|
343ffaa18b63c92e182b16c3ad84b8d81ca4df69
|
55e29a1464fef700671096dd99bcae89e574ff2f
| 2020-02-17T19:27:40Z |
python
| 2020-05-14T16:28:08Z |
test/integration/targets/ansible-galaxy/runme.sh
|
#!/usr/bin/env bash
set -eux -o pipefail
ansible-playbook setup.yml "$@"
trap 'ansible-playbook ${ANSIBLE_PLAYBOOK_DIR}/cleanup.yml' EXIT
# Very simple version test
ansible-galaxy --version
# Need a relative custom roles path for testing various scenarios of -p
galaxy_relative_rolespath="my/custom/roles/path"
# Status message function (f_ to designate that it's a function)
f_ansible_galaxy_status()
{
printf "\n\n\n### Testing ansible-galaxy: %s\n" "${@}"
}
# Use to initialize a repository. Must call the post function too.
f_ansible_galaxy_create_role_repo_pre()
{
repo_name=$1
repo_dir=$2
pushd "${repo_dir}"
ansible-galaxy init "${repo_name}"
pushd "${repo_name}"
git init .
# Prep git, because it doesn't work inside a docker container without it
git config user.email "[email protected]"
git config user.name "Ansible Tester"
# f_ansible_galaxy_create_role_repo_post
}
# Call after f_ansible_galaxy_create_role_repo_pre.
f_ansible_galaxy_create_role_repo_post()
{
repo_name=$1
repo_tar=$2
# f_ansible_galaxy_create_role_repo_pre
git add .
git commit -m "local testing ansible galaxy role"
git archive \
--format=tar \
--prefix="${repo_name}/" \
master > "${repo_tar}"
popd # "${repo_name}"
popd # "${repo_dir}"
}
# Prep the local git repos with role and make a tar archive so we can test
# different things
galaxy_local_test_role="test-role"
galaxy_local_test_role_dir=$(mktemp -d)
galaxy_local_test_role_git_repo="${galaxy_local_test_role_dir}/${galaxy_local_test_role}"
galaxy_local_test_role_tar="${galaxy_local_test_role_dir}/${galaxy_local_test_role}.tar"
f_ansible_galaxy_create_role_repo_pre "${galaxy_local_test_role}" "${galaxy_local_test_role_dir}"
f_ansible_galaxy_create_role_repo_post "${galaxy_local_test_role}" "${galaxy_local_test_role_tar}"
galaxy_local_parent_role="parent-role"
galaxy_local_parent_role_dir=$(mktemp -d)
galaxy_local_parent_role_git_repo="${galaxy_local_parent_role_dir}/${galaxy_local_parent_role}"
galaxy_local_parent_role_tar="${galaxy_local_parent_role_dir}/${galaxy_local_parent_role}.tar"
# Create parent-role repository
f_ansible_galaxy_create_role_repo_pre "${galaxy_local_parent_role}" "${galaxy_local_parent_role_dir}"
cat <<EOF > meta/requirements.yml
- src: git+file:///${galaxy_local_test_role_git_repo}
EOF
f_ansible_galaxy_create_role_repo_post "${galaxy_local_parent_role}" "${galaxy_local_parent_role_tar}"
# Galaxy install test case
#
# Install local git repo
f_ansible_galaxy_status "install of local git repo"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Install local git repo and ensure that if a role_path is passed, it is in fact used
f_ansible_galaxy_status "install of local git repo with -p \$role_path"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
mkdir -p "${galaxy_relative_rolespath}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" -p "${galaxy_relative_rolespath}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${galaxy_relative_rolespath}/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy install test case
#
# Install local git repo with a meta/requirements.yml
f_ansible_galaxy_status "install of local git repo with meta/requirements.yml"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_parent_role_git_repo}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_parent_role}" ]]
# Test that the dependency was also installed
[[ -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_parent_role}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Install local git repo with a meta/requirements.yml + --no-deps argument
f_ansible_galaxy_status "install of local git repo with meta/requirements.yml + --no-deps argument"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_parent_role_git_repo}" --no-deps "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_parent_role}" ]]
# Test that the dependency was not installed
[[ ! -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Ensure that if both a role_file and role_path is provided, they are both
# honored
#
# Protect against regression (GitHub Issue #35217)
# https://github.com/ansible/ansible/issues/35217
f_ansible_galaxy_status \
"install of local git repo and local tarball with -p \$role_path and -r \$role_file" \
"Protect against regression (Issue #35217)"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
git clone "${galaxy_local_test_role_git_repo}" "${galaxy_local_test_role}"
ansible-galaxy init roles-path-bug "$@"
pushd roles-path-bug
cat <<EOF > ansible.cfg
[defaults]
roles_path = ../:../../:../roles:roles/
EOF
cat <<EOF > requirements.yml
---
- src: ${galaxy_local_test_role_tar}
name: ${galaxy_local_test_role}
EOF
ansible-galaxy install -r requirements.yml -p roles/ "$@"
popd # roles-path-bug
# Test that the role was installed to the expected directory
[[ -d "${galaxy_testdir}/roles-path-bug/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy role list tests
#
# Basic tests to ensure listing roles works
f_ansible_galaxy_status "role list"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" "$@"
ansible-galaxy role list | tee out.txt
ansible-galaxy role list test-role | tee -a out.txt
[[ $(grep -c '^- test-role' out.txt ) -eq 2 ]]
popd # ${galaxy_testdir}
# Galaxy role test case
#
# Test listing a specific role that is not in the first path in ANSIBLE_ROLES_PATH.
# https://github.com/ansible/ansible/issues/60167#issuecomment-585460706
f_ansible_galaxy_status \
"list specific role not in the first path in ANSIBLE_ROLES_PATHS"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir testroles
ansible-galaxy role init --init-path ./local-roles quark
ANSIBLE_ROLES_PATH=./local-roles:${HOME}/.ansible/roles ansible-galaxy role list quark | tee out.txt
[[ $(grep -c 'not found' out.txt) -eq 0 ]]
ANSIBLE_ROLES_PATH=${HOME}/.ansible/roles:./local-roles ansible-galaxy role list quark | tee out.txt
[[ $(grep -c 'not found' out.txt) -eq 0 ]]
popd # ${role_testdir}
rm -fr "${role_testdir}"
# Galaxy role info tests
f_ansible_galaxy_status \
"role info non-existant role"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
ansible-galaxy role info notaroll | tee out.txt
grep -- '- the role notaroll was not found' out.txt
f_ansible_galaxy_status \
"role info description offline"
mkdir testroles
ansible-galaxy role init testdesc --init-path ./testroles
# Only galaxy_info['description'] exists in file
sed -i -e 's#[[:space:]]\{1,\}description:.*$# description: Description in galaxy_info#' ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Description in galaxy_info' out.txt
# Both top level 'description' and galaxy_info['description'] exist in file
# Use shell-fu instead of sed to prepend a line to a file because BSD
# and macOS sed don't work the same as GNU sed.
echo 'description: Top level' | \
cat - ./testroles/testdesc/meta/main.yml > tmp.yml && \
mv tmp.yml ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Top level' out.txt
# Only top level 'description' exists in file
sed -i.bak '/^[[:space:]]\{1,\}description: Description in galaxy_info/d' ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Top level' out.txt
popd # ${role_testdir}
rm -fr "${role_testdir}"
# Properly list roles when the role name is a subset of the path, or when the
# role name is the same as the parent directory of the role. Issue #67365
#
# ./parrot/parrot
# ./parrot/arr
# ./testing-roles/test
f_ansible_galaxy_status \
"list roles where the role name is the same or a subset of the role path (#67365)"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir parrot
ansible-galaxy role init --init-path ./parrot parrot
ansible-galaxy role init --init-path ./parrot parrot-ship
ansible-galaxy role init --init-path ./parrot arr
ansible-galaxy role list -p ./parrot | tee out.txt
[[ $(grep -Ec '\- (parrot|arr)' out.txt) -eq 3 ]]
ansible-galaxy role list test-role | tee -a out.txt
popd # ${role_testdir}
rm -rf "${role_testdir}"
f_ansible_galaxy_status \
"Test role with non-ascii characters"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir nonascii
ansible-galaxy role init --init-path ./nonascii nonascii
touch nonascii/ÅÑŚÌβŁÈ.txt
tar czvf nonascii.tar.gz nonascii
ansible-galaxy role install -p ./roles nonascii.tar.gz
popd # ${role_testdir}
rm -rf "${role_testdir}"
#################################
# ansible-galaxy collection tests
#################################
# TODO: Move these to ansible-galaxy-collection
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
## ansible-galaxy collection list tests
# Create more collections and put them in various places
f_ansible_galaxy_status \
"setting up for collection list tests"
rm -rf ansible_test/* install/*
NAMES=(zoo museum airport)
for n in "${NAMES[@]}"; do
ansible-galaxy collection init "ansible_test.$n"
ansible-galaxy collection build "ansible_test/$n"
done
ansible-galaxy collection install ansible_test-zoo-1.0.0.tar.gz
ansible-galaxy collection install ansible_test-museum-1.0.0.tar.gz -p ./install
ansible-galaxy collection install ansible_test-airport-1.0.0.tar.gz -p ./local
# Change the collection version and install to another location
sed -i -e 's#^version:.*#version: 2.5.0#' ansible_test/zoo/galaxy.yml
ansible-galaxy collection build ansible_test/zoo
ansible-galaxy collection install ansible_test-zoo-2.5.0.tar.gz -p ./local
export ANSIBLE_COLLECTIONS_PATHS=~/.ansible/collections:${galaxy_testdir}/local
f_ansible_galaxy_status \
"collection list all collections"
ansible-galaxy collection list -p ./install | tee out.txt
[[ $(grep -c ansible_test out.txt) -eq 4 ]]
f_ansible_galaxy_status \
"collection list specific collection"
ansible-galaxy collection list -p ./install ansible_test.airport | tee out.txt
[[ $(grep -c 'ansible_test\.airport' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list specific collection found in multiple places"
ansible-galaxy collection list -p ./install ansible_test.zoo | tee out.txt
[[ $(grep -c 'ansible_test\.zoo' out.txt) -eq 2 ]]
f_ansible_galaxy_status \
"collection list all with duplicate paths"
ansible-galaxy collection list -p ~/.ansible/collections | tee out.txt
[[ $(grep -c '# /root/.ansible/collections/ansible_collections' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list invalid collection name"
ansible-galaxy collection list -p ./install dirty.wraughten.name "$@" 2>&1 | tee out.txt || echo "expected failure"
grep 'ERROR! Invalid collection name' out.txt
f_ansible_galaxy_status \
"collection list path not found"
ansible-galaxy collection list -p ./nope "$@" 2>&1 | tee out.txt || echo "expected failure"
grep '\[WARNING\]: - the configured path' out.txt
f_ansible_galaxy_status \
"collection list missing ansible_collections dir inside path"
mkdir emptydir
ansible-galaxy collection list -p ./emptydir "$@"
rmdir emptydir
unset ANSIBLE_COLLECTIONS_PATHS
## end ansible-galaxy collection list
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${galaxy_local_test_role_dir}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68218 |
Lineinfile and Replace modules do not allow for setting the temporary directory
|
The replace and lineinfile modules do not allow for temporary directories to be set with environment variables. See https://github.com/ansible/ansible/issues/24082 for a similar bug report.
!component =lib/ansible/modules/files/lineinfile.py
However, this report concerns the facts directory. Here's how to reproduce it:
On the target server we are trying to write to `/etc/ansible/facts.d/cloudera.fact`:
```
# ls -al / | grep etc
drwxr-xr-x. 151 root root 12288 Mar 9 08:38 etc
# ls -al /etc | grep ansible
drwxr-xr-x 3 root root 4096 Nov 22 2016 ansible
# ls -al /etc/ansible/ | grep 'facts.d'
drwxr-xr-x 2 root root 4096 Mar 13 11:53 facts.d
# ls -al /etc/ansible/facts.d/ | grep cloudera.fact
-rw-rw-r-- 1 cloudera-scm cloudera-scm 10 Mar 13 11:54 cloudera.fact
```
Here is the playbook:
```
cat playbooks/test_svcacct_lineinfile.yml
- hosts: all
become: True
become_user: cloudera-scm
gather_facts: False
tasks:
- name: Add lineinfile
lineinfile:
line: 'key=value2'
state: present
create: False
path: '/etc/ansible/facts.d/cloudera.fact'
```
This will fail since there is no way to override the temporary directory: lineinfile hard-codes it to the directory containing the file being edited, ignoring `remote_tmp` and the `['TMP', 'TEMP', 'TMPDIR']` environment variables:
https://github.com/ansible/ansible-modules-core/blob/devel/files/lineinfile.py#L213
which calls
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/basic.py#L2238
hard-coding the `dir` param that is passed to the `tempfile.mkstemp()` call.
Is there a way that this can be changed to be able to set a custom temp dir?
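One possible direction (a sketch, assuming `AnsibleModule.tmpdir` is available, which already honors `remote_tmp` and the usual temp environment variables):

```python
import os
import tempfile

def write_changes(module, b_lines, dest):
    # Sketch: pass dir=module.tmpdir instead of letting mkstemp() pick the
    # location, so become_user scenarios like the one above can place the
    # temp file somewhere the unprivileged user can write to.
    tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
    with os.fdopen(tmpfd, 'wb') as f:
        f.writelines(b_lines)
    # ... validation and atomic_move as in the existing module ...
```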
|
https://github.com/ansible/ansible/issues/68218
|
https://github.com/ansible/ansible/pull/69543
|
34db57a47f875d11c4068567b9ec7ace174ec4cf
|
b8469d5c7a0b24978836d445502d735089293d3c
| 2020-03-13T17:47:17Z |
python
| 2020-05-15T19:52:17Z |
lib/ansible/modules/lineinfile.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# Copyright: (c) 2014, Ahti Kitsik <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: lineinfile
short_description: Manage lines in text files
description:
- This module ensures a particular line is in a file, or replaces an
existing line using a back-referenced regular expression.
- This is primarily useful when you want to change a single line in a file only.
- See the M(replace) module if you want to change multiple, similar lines
or check M(blockinfile) if you want to insert/update/remove a block of lines in a file.
For other cases, see the M(copy) or M(template) modules.
version_added: "0.7"
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: true
aliases: [ dest, destfile, name ]
regexp:
description:
- The regular expression to look for in every line of the file.
- For C(state=present), the pattern to replace if found. Only the last line found will be replaced.
- For C(state=absent), the pattern of the line(s) to remove.
- If the regular expression is not matched, the line will be
added to the file in keeping with C(insertbefore) or C(insertafter)
settings.
- When modifying a line the regexp should typically match both the initial state of
the line as well as its state after replacement by C(line) to ensure idempotence.
- Uses Python regular expressions. See U(http://docs.python.org/2/library/re.html).
type: str
aliases: [ regex ]
version_added: '1.7'
state:
description:
- Whether the line should be there or not.
type: str
choices: [ absent, present ]
default: present
line:
description:
- The line to insert/replace into the file.
- Required for C(state=present).
- If C(backrefs) is set, may contain backreferences that will get
expanded with the C(regexp) capture groups if the regexp matches.
type: str
aliases: [ value ]
backrefs:
description:
- Used with C(state=present).
- If set, C(line) can contain backreferences (both positional and named)
that will get populated if the C(regexp) matches.
- This parameter changes the operation of the module slightly;
C(insertbefore) and C(insertafter) will be ignored, and if the C(regexp)
does not match anywhere in the file, the file will be left unchanged.
- If the C(regexp) does match, the last matching line will be replaced by
the expanded line parameter.
type: bool
default: no
version_added: "1.1"
insertafter:
description:
- Used with C(state=present).
- If specified, the line will be inserted after the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(EOF) for inserting the line at the end of the file.
- If specified regular expression has no matches, EOF will be used instead.
- If C(insertbefore) is set, default value C(EOF) will be ignored.
- If regular expressions are passed to both C(regexp) and C(insertafter), C(insertafter) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertbefore).
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- Used with C(state=present).
- If specified, the line will be inserted before the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(BOF) for inserting the line at the beginning of the file.
- If specified regular expression has no matches, the line will be inserted at the end of the file.
- If regular expressions are passed to both C(regexp) and C(insertbefore), C(insertbefore) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertafter).
type: str
choices: [ BOF, '*regex*' ]
version_added: "1.1"
create:
description:
- Used with C(state=present).
- If specified, the file will be created if it does not already exist.
- By default it will fail if the file is missing.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
firstmatch:
description:
- Used with C(insertafter) or C(insertbefore).
- If set, C(insertafter) and C(insertbefore) will work with the first line that matches the given regular expression.
type: bool
default: no
version_added: "2.5"
others:
description:
- All arguments accepted by the M(file) module also work here.
type: str
extends_documentation_fragment:
- files
- validate
notes:
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
seealso:
- module: blockinfile
- module: copy
- module: file
- module: replace
- module: template
- module: win_lineinfile
author:
- Daniel Hokka Zakrisson (@dhozac)
- Ahti Kitsik (@ahtik)
'''
EXAMPLES = r'''
# NOTE: Before 2.3, option 'dest', 'destfile' or 'name' was used instead of 'path'
- name: Ensure SELinux is set to enforcing mode
lineinfile:
path: /etc/selinux/config
regexp: '^SELINUX='
line: SELINUX=enforcing
- name: Make sure group wheel is not in the sudoers configuration
lineinfile:
path: /etc/sudoers
state: absent
regexp: '^%wheel'
- name: Replace a localhost entry with our own
lineinfile:
path: /etc/hosts
regexp: '^127\.0\.0\.1'
line: 127.0.0.1 localhost
owner: root
group: root
mode: '0644'
- name: Ensure the default Apache port is 8080
lineinfile:
path: /etc/httpd/conf/httpd.conf
regexp: '^Listen '
insertafter: '^#Listen '
line: Listen 8080
- name: Ensure we have our own comment added to /etc/services
lineinfile:
path: /etc/services
regexp: '^# port for http'
insertbefore: '^www.*80/tcp'
line: '# port for http by default'
- name: Add a line to a file if the file does not exist, without passing regexp
lineinfile:
path: /tmp/testfile
line: 192.168.1.99 foo.lab.net foo
create: yes
# NOTE: Yaml requires escaping backslashes in double quotes but not in single quotes
- name: Ensure the JBoss memory settings are exactly as needed
lineinfile:
path: /opt/jboss-as/bin/standalone.conf
regexp: '^(.*)Xms(\d+)m(.*)$'
line: '\1Xms${xms}m\3'
backrefs: yes
# NOTE: Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- name: Validate the sudoers file before saving
lineinfile:
path: /etc/sudoers
state: present
regexp: '^%ADMIN ALL='
line: '%ADMIN ALL=(ALL) NOPASSWD: ALL'
validate: /usr/sbin/visudo -cf %s
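# NOTE: An illustrative sketch (hypothetical file and pattern, not part of the original
# examples) showing the named backreferences that backrefs also supports.
- name: Rewrite a key=value line using a named capture group
  lineinfile:
    path: /etc/example.conf
    regexp: '^mykey=(?P<val>\w+).*$'
    line: 'mykey=\g<val>'
    backrefs: yes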
'''
import os
import re
import tempfile
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
def write_changes(module, b_lines, dest):
tmpfd, tmpfile = tempfile.mkstemp()
with os.fdopen(tmpfd, 'wb') as f:
f.writelines(b_lines)
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(to_bytes(validate % tmpfile, errors='surrogate_or_strict'))
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile,
to_native(os.path.realpath(to_bytes(dest, errors='surrogate_or_strict')), errors='surrogate_or_strict'),
unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def present(module, dest, regexp, line, insertafter, insertbefore, create,
backup, backrefs, firstmatch):
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
if not create:
module.fail_json(rc=257, msg='Destination %s does not exist !' % dest)
b_destpath = os.path.dirname(b_dest)
if b_destpath and not os.path.exists(b_destpath) and not module.check_mode:
try:
os.makedirs(b_destpath)
except Exception as e:
module.fail_json(msg='Error creating %s Error: %s' % (to_native(b_destpath), to_native(e)))
b_lines = []
else:
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_m = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
if insertafter not in (None, 'BOF', 'EOF'):
bre_ins = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
bre_ins = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
bre_ins = None
# index[0] is the line num where regexp has been found
# index[1] is the line num where insertafter/insertbefore has been found
index = [-1, -1]
match = None
exact_line_match = False
b_line = to_bytes(line, errors='surrogate_or_strict')
# The module's doc says
# "If regular expressions are passed to both regexp and
# insertafter, insertafter is only honored if no match for regexp is found."
# Therefore:
# 1. regexp was found -> ignore insertafter, replace the matched line
# 2. regexp was not found -> insert the line after 'insertafter' or 'insertbefore' line
# Given the above:
# 1. First check that there is no match for regexp:
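# For example (illustrative, mirroring the httpd example above): with
# regexp='^Listen ' and insertafter='^#Listen ', a file containing 'Listen 80'
# has that line replaced, while a file with only '#Listen 80' has the new line
# inserted after it.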
if regexp is not None:
for lineno, b_cur_line in enumerate(b_lines):
match_found = bre_m.search(b_cur_line)
if match_found:
index[0] = lineno
match = match_found
if firstmatch:
break
# 2. If no match was found in the previous step,
# search for the insertafter/insertbefore patterns:
if not match:
for lineno, b_cur_line in enumerate(b_lines):
if b_line == b_cur_line.rstrip(b'\r\n'):
index[0] = lineno
exact_line_match = True
elif bre_ins is not None and bre_ins.search(b_cur_line):
if insertafter:
# + 1 for the next line
index[1] = lineno + 1
if firstmatch:
break
if insertbefore:
# index[1] for the previous line
index[1] = lineno
if firstmatch:
break
msg = ''
changed = False
b_linesep = to_bytes(os.linesep, errors='surrogate_or_strict')
# Exact line or Regexp matched a line in the file
if index[0] != -1:
if backrefs and match:
b_new_line = match.expand(b_line)
else:
# Don't do backref expansion if not asked.
b_new_line = b_line
if not b_new_line.endswith(b_linesep):
b_new_line += b_linesep
# If no regexp was given and no line match is found anywhere in the file,
# insert the line appropriately if using insertbefore or insertafter
if regexp is None and match is None and not exact_line_match:
# Insert lines
if insertafter and insertafter != 'EOF':
# Ensure there is a line separator after the found string
# at the end of the file.
if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'):
b_lines[-1] = b_lines[-1] + b_linesep
# If the line to insert after is at the end of the file
# use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif insertbefore and insertbefore != 'BOF':
# If the line to insert before is at the beginning of the file
# use the appropriate index value.
if index[1] <= 0:
if b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[0]] != b_new_line:
b_lines[index[0]] = b_new_line
msg = 'line replaced'
changed = True
elif backrefs:
# Do absolutely nothing, since it's not safe to generate the line
# without the regexp matching to populate the backrefs.
pass
# Add it to the beginning of the file
elif insertbefore == 'BOF' or insertafter == 'BOF':
b_lines.insert(0, b_line + b_linesep)
msg = 'line added'
changed = True
# Add it to the end of the file if requested or
# if insertafter/insertbefore didn't match anything
# (so default behaviour is to add at the end)
elif insertafter == 'EOF' or index[1] == -1:
# If the file is not empty then ensure there's a newline before the added line
if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'):
b_lines.append(b_linesep)
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif insertafter and index[1] != -1:
# Don't insert the line if it already matches at the index.
# If the line to insert after is at the end of the file use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_line != b_lines[index[1]].rstrip(b'\n\r'):
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
# insert matched, but not the regexp
else:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup and os.path.exists(b_dest):
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if module.check_mode and not os.path.exists(b_dest):
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=difflist)
def absent(module, dest, regexp, line, backup):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
module.exit_json(changed=False, msg="file not present")
msg = ''
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_c = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
found = []
b_line = to_bytes(line, errors='surrogate_or_strict')
def matcher(b_cur_line):
if regexp is not None:
match_found = bre_c.search(b_cur_line)
else:
match_found = b_line == b_cur_line.rstrip(b'\r\n')
if match_found:
found.append(b_cur_line)
return not match_found
b_lines = [l for l in b_lines if matcher(l)]
changed = len(found) > 0
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup:
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if changed:
msg = "%s line(s) removed" % len(found)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, found=len(found), msg=msg, backup=backupdest, diff=difflist)
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
regexp=dict(type='str', aliases=['regex']),
line=dict(type='str', aliases=['value']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
backrefs=dict(type='bool', default=False),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
firstmatch=dict(type='bool', default=False),
validate=dict(type='str'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True,
)
params = module.params
create = params['create']
backup = params['backup']
backrefs = params['backrefs']
path = params['path']
firstmatch = params['firstmatch']
regexp = params['regexp']
line = params['line']
if regexp == '':
module.warn(
"The regular expression is an empty string, which will match every line in the file. "
"This may have unintended consequences, such as replacing the last line in the file rather than appending. "
"If this is desired, use '^' to match every line in the file and avoid this warning.")
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
module.fail_json(rc=256, msg='Path %s is a directory !' % path)
if params['state'] == 'present':
if backrefs and regexp is None:
module.fail_json(msg='regexp is required with backrefs=true')
if line is None:
module.fail_json(msg='line is required with state=present')
# Deal with the insertafter default value manually, to avoid errors
# because of the mutually_exclusive mechanism.
ins_bef, ins_aft = params['insertbefore'], params['insertafter']
if ins_bef is None and ins_aft is None:
ins_aft = 'EOF'
present(module, path, regexp, line,
ins_aft, ins_bef, create, backup, backrefs, firstmatch)
else:
if regexp is None and line is None:
module.fail_json(msg='one of line or regexp is required with state=absent')
absent(module, path, regexp, line, backup)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
##### SUMMARY
If I create one `requirements.yml` file which lists all my Ansible dependencies for a given project, including roles and collections from Galaxy, and try to install the dependencies using `ansible-galaxy`, it results in an unexpected and unintuitive behavior.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.5
config file = /Users/jgeerling/Downloads/blend-test/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.6 (default, Feb 9 2020, 13:28:08) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = True
COLLECTIONS_PATHS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test']
DEFAULT_ROLES_PATH(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test/roles']
```
##### OS / ENVIRONMENT
- macOS Catalina and Ubuntu
##### STEPS TO REPRODUCE
Create a new playbook project directory (e.g. 'example'). Inside the directory, create a `requirements.yml` file with the contents:
```yaml
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.7
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.5
source: https://galaxy.ansible.com
```
Run:
ansible-galaxy install -r requirements.yml
##### EXPECTED RESULTS
All my project dependencies are installed as listed in the `requirements.yml` file.
##### ACTUAL RESULTS
```
$ ansible-galaxy install -r requirements.yml
- downloading role 'java', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-java/archive/1.9.7.tar.gz
- extracting geerlingguy.java to /Users/jgeerling/.ansible/roles/geerlingguy.java
- geerlingguy.java (1.9.7) was installed successfully
$ echo $?
0
```
This command works, but results in only the defined _role_ being installed. So after being confused as to why the _collection_ doesn't get installed, I read through the Collections documentation and find that, if I want collections installed, I have to run a separate command. To illustrate:
```
# This will only install roles (and give no warning that collections were detected but not installed).
$ ansible-galaxy install -r requirements.yml
# This results in the same behavior as above.
$ ansible-galaxy role install -r requirements.yml
# This will only install collections (and give no warning that roles were detected but not installed).
$ ansible-galaxy collection install -r requirements.yml
```
Because most of my projects will require roles, or roles and collections, in the `requirements.yml` file in the coming months/years (and will likely remain that way for the next few years, as many roles on Galaxy that I depend on are not making the jump to Collections, especially simpler ones that are minimally maintained), this UX is kind of difficult to stomach, as now I'll need to make sure to have users do _two_ things every time they use or update one of my playbook projects (and CI will also need to run two commands).
It would make more sense to me to have the following behavior:
```
# Installs everything in requirements.yml (collections and roles).
$ ansible-galaxy install -r requirements.yml
# Only installs roles in requirements.yml (displays warning that there are also collections present in the file, if there are any).
$ ansible-galaxy role install -r requirements.yml
# Only installs collections in requirements.yml (displays warning that there are also roles present in the file, if there are any).
$ ansible-galaxy collection install -r requirements.yml
```
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
changelogs/fragments/ansible-galaxy-install.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
docs/docsite/rst/shared_snippets/installing_multiple_collections.txt
|
You can also setup a ``requirements.yml`` file to install multiple collections in one command. This file is a YAML file in the format:
.. code-block:: yaml+jinja
---
collections:
# With just the collection name
- my_namespace.my_collection
# With the collection name, version, and source options
- name: my_namespace.my_other_collection
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
The ``version`` key accepts the same range identifier format documented above.
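For example (an illustrative sketch; the collection name and range are hypothetical):

.. code-block:: yaml

   collections:
     - name: my_namespace.my_collection
       version: '>=1.0.5,<2.0.0'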
Roles can also be specified and placed under the ``roles`` key. The values follow the same format as a requirements
file used in older Ansible releases.
.. code-block:: yaml
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.6
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.3
source: https://galaxy.ansible.com
.. note::
While both roles and collections can be specified in one requirements file, they need to be installed separately.
    Running ``ansible-galaxy role install -r requirements.yml`` will only install roles, and
    ``ansible-galaxy collection install -r requirements.yml`` will only install collections.
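For example, the two-step install (a minimal sketch of the workflow described in the note above):

.. code-block:: bash

   ansible-galaxy role install -r requirements.yml
   ansible-galaxy collection install -r requirements.yml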
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
CollectionRequirement,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections
)
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection),
version=collection.latest_version,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if is_iterable(collections):
fqcn_set = set(to_text(c) for c in collections)
version_set = set(to_text(c.latest_version) for c in collections)
else:
fqcn_set = set([to_text(collections)])
version_set = set([collections.latest_version])
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collection-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=C.COLLECTIONS_PATHS, action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
"command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The collection(s) name or '
'path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=C.COLLECTIONS_PATHS[0],
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
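# Illustrative sketch (hypothetical server name, not defined in this file): a
# GALAXY_SERVER_LIST entry 'release_galaxy' resolves to an ansible.cfg section such as:
#   [galaxy_server.release_galaxy]
#   url=https://galaxy.ansible.com/
#   token=my_token
# or to environment variables like ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_URL.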
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False)]
validate_certs = not context.CLIARGS['ignore_certs']
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_key in server_list:
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options['validate_certs'] = validate_certs
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
validate_certs=validate_certs))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
validate_certs=validate_certs))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections') or []:
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy,
"explicit_requirement_%s" % req_name,
req_source,
validate_certs=not context.CLIARGS['ignore_certs']))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(self, collections, requirements_file):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
requirement = None
if os.path.isfile(to_bytes(collection_input, errors='surrogate_or_strict')) or \
urlparse(collection_input).scheme.lower() in ['http', 'https']:
# Arg is a file path or URL to a collection
name = collection_input
else:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
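Example (an illustrative invocation; the path is hypothetical):
``ansible-galaxy collection build ./my_namespace/my_collection --output-path ./dist``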
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_download(self):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
ignore_certs = context.CLIARGS['ignore_certs']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(requirements, download_path, self.api_servers, (not ignore_certs), no_deps,
context.CLIARGS['allow_pre_release'])
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
if not gr._exists:
data = u"- the role %s was not found" % role
break
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
self.pager(data)
def execute_verify(self):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
verify_collections(requirements, resolved_paths, self.api_servers, (not ignore_certs), ignore_errors,
allow_pre_release=True)
return 0
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the Galaxy API and GitHub), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(collections, requirements_file)
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
output_path = validate_collection_path(output_path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps, context.CLIARGS['allow_pre_release'])
return 0
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify either --role-file or a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
Removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_list_collection(self):
"""
List all collections installed on the local system
"""
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = C.config.get_configuration_definition('COLLECTIONS_PATHS').get('default')
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
collection = CollectionRequirement.from_path(b_collection_path, False, fallback_metadata=True)
fqcn_width, version_width = _get_collection_widths(collection)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = find_existing_collections(collection_path, fallback_metadata=True)
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
collections.sort(key=to_text)
for collection in collections:
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' Searches for roles on the Ansible Galaxy server '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_login(self):
"""
Verify the user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
##### SUMMARY
If I create one `requirements.yml` file which lists all my Ansible dependencies for a given project, including roles and collections from Galaxy, and try to install the dependencies using `ansible-galaxy`, the result is unexpected and unintuitive behavior.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.5
config file = /Users/jgeerling/Downloads/blend-test/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.6 (default, Feb 9 2020, 13:28:08) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = True
COLLECTIONS_PATHS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test']
DEFAULT_ROLES_PATH(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test/roles']
```
##### OS / ENVIRONMENT
- macOS Catalina and Ubuntu
##### STEPS TO REPRODUCE
Create a new playbook project directory (e.g. 'example'). Inside the directory, create a `requirements.yml` file with the contents:
```yaml
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.7
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.5
source: https://galaxy.ansible.com
```
Run:
ansible-galaxy install -r requirements.yml
##### EXPECTED RESULTS
All my project dependencies are installed as listed in the `requirements.yml` file.
##### ACTUAL RESULTS
```
$ ansible-galaxy install -r requirements.yml
- downloading role 'java', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-java/archive/1.9.7.tar.gz
- extracting geerlingguy.java to /Users/jgeerling/.ansible/roles/geerlingguy.java
- geerlingguy.java (1.9.7) was installed successfully
$ echo $?
0
```
This command works, and results in only the defined _role_ being installed. So after being confused as to why the _collection_ doesn't get installed, I read through the Collections documentation and found that, if I want collections installed, I have to run a separate command. To illustrate:
```
# This will only install roles (and give no warning that collections were detected but not installed).
$ ansible-galaxy install -r requirements.yml
# This results in the same behavior as above.
$ ansible-galaxy role install -r requirements.yml
# This will only install collections (and give no warning that roles were detected but not installed).
$ ansible-galaxy collection install -r requirements.yml
```
Because most of my projects will require either roles, or roles and collections, in the `requirements.yml` file in the coming months/years (and will likely remain that way for the next few years, as many roles on Galaxy that I depend on are not making the jump to Collections, especially simpler ones that are minimally maintained), this UX is difficult to stomach: now I'll need to make sure users do _two_ things every time they use or update one of my playbook projects (and CI will also need to run two commands).
It would make more sense to me to have the following behavior:
```
# Installs everything in requirements.yml (collections and roles).
$ ansible-galaxy install -r requirements.yml
# Only installs roles in requirements.yml (displays warning that there are also collections present in the file, if there are any).
$ ansible-galaxy role install -r requirements.yml
# Only installs collections in requirements.yml (displays warning that there are also roles present in the file, if there are any).
$ ansible-galaxy collection install -r requirements.yml
```
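A minimal sketch of how a unified installer could dispatch on the file contents (hypothetical code, not the actual implementation; `install_role` and `install_collection` are stand-ins for the existing per-type code paths):
```python
import yaml

def install_everything(requirements_path, install_role, install_collection):
    """Install both roles and collections listed in one requirements.yml."""
    with open(requirements_path) as f:
        requirements = yaml.safe_load(f) or {}
    for role in requirements.get('roles') or []:
        install_role(role)              # delegate to the existing role installer
    for collection in requirements.get('collections') or []:
        install_collection(collection)  # delegate to the existing collection installer
```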
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
test/integration/targets/ansible-galaxy-collection/tasks/download.yml
|
---
- name: create test download dir
file:
path: '{{ galaxy_dir }}/download'
state: directory
- name: download collection with multiple dependencies
command: ansible-galaxy collection download parent_dep.parent_collection -s {{ fallaxy_galaxy_server }}
register: download_collection
args:
chdir: '{{ galaxy_dir }}/download'
- name: get result of download collection with multiple dependencies
find:
path: '{{ galaxy_dir }}/download/collections'
file_type: file
register: download_collection_actual
- name: assert download collection with multiple dependencies
assert:
that:
- '"Downloading collection ''parent_dep.parent_collection'' to" in download_collection.stdout'
- '"Downloading collection ''child_dep.child_collection'' to" in download_collection.stdout'
- '"Downloading collection ''child_dep.child_dep2'' to" in download_collection.stdout'
- download_collection_actual.examined == 4
- download_collection_actual.matched == 4
- (download_collection_actual.files[0].path | basename) in ['requirements.yml', 'child_dep-child_dep2-1.2.2.tar.gz', 'child_dep-child_collection-0.9.9.tar.gz', 'parent_dep-parent_collection-1.0.0.tar.gz']
- (download_collection_actual.files[1].path | basename) in ['requirements.yml', 'child_dep-child_dep2-1.2.2.tar.gz', 'child_dep-child_collection-0.9.9.tar.gz', 'parent_dep-parent_collection-1.0.0.tar.gz']
- (download_collection_actual.files[2].path | basename) in ['requirements.yml', 'child_dep-child_dep2-1.2.2.tar.gz', 'child_dep-child_collection-0.9.9.tar.gz', 'parent_dep-parent_collection-1.0.0.tar.gz']
- (download_collection_actual.files[3].path | basename) in ['requirements.yml', 'child_dep-child_dep2-1.2.2.tar.gz', 'child_dep-child_collection-0.9.9.tar.gz', 'parent_dep-parent_collection-1.0.0.tar.gz']
- name: test install of download requirements file
command: ansible-galaxy collection install -r requirements.yml -p '{{ galaxy_dir }}/download'
args:
chdir: '{{ galaxy_dir }}/download/collections'
register: install_download
- name: get result of test install of download requirements file
slurp:
path: '{{ galaxy_dir }}/download/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_download_actual
loop_control:
loop_var: collection
loop:
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert test install of download requirements file
assert:
that:
- '"Installing ''parent_dep.parent_collection:1.0.0'' to" in install_download.stdout'
- '"Installing ''child_dep.child_collection:0.9.9'' to" in install_download.stdout'
- '"Installing ''child_dep.child_dep2:1.2.2'' to" in install_download.stdout'
- (install_download_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_download_actual.results[1].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_download_actual.results[2].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: create test requirements file for download
copy:
content: |
collections:
- name: namespace1.name1
version: 1.1.0-beta.1
dest: '{{ galaxy_dir }}/download/download.yml'
- name: download collection with req to custom dir
command: ansible-galaxy collection download -r '{{ galaxy_dir }}/download/download.yml' -s {{ fallaxy_ah_server }} -p '{{ galaxy_dir }}/download/collections-custom'
register: download_req_custom_path
- name: get result of download collection with req to custom dir
find:
path: '{{ galaxy_dir }}/download/collections-custom'
file_type: file
register: download_req_custom_path_actual
- name: assert download collection with req to custom dir
assert:
that:
- '"Downloading collection ''namespace1.name1'' to" in download_req_custom_path.stdout'
- download_req_custom_path_actual.examined == 2
- download_req_custom_path_actual.matched == 2
- (download_req_custom_path_actual.files[0].path | basename) in ['requirements.yml', 'namespace1-name1-1.1.0-beta.1.tar.gz']
- (download_req_custom_path_actual.files[1].path | basename) in ['requirements.yml', 'namespace1-name1-1.1.0-beta.1.tar.gz']
# https://github.com/ansible/ansible/issues/68186
- name: create test requirements file without roles and collections
copy:
content: |
collections:
roles:
dest: '{{ galaxy_dir }}/download/no_roles_no_collections.yml'
- name: install collection with requirements
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/download/no_roles_no_collections.yml'
register: install_no_requirements
- name: assert install collection with no roles and no collections in requirements
assert:
that:
- '"Process install" in install_no_requirements.stdout'
- '"Starting collection" in install_no_requirements.stdout'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
test/integration/targets/ansible-galaxy-collection/tasks/install.yml
|
---
- name: create test collection install directory - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: directory
- name: install simple collection with implicit path - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_server }}'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_normal
- name: get installed files of install simple collection with implicit path - {{ test_name }}
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection with implicit path - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection with implicit path - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_normal.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: install existing without --force - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_server }}'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_no_force
- name: assert install existing without --force - {{ test_name }}
assert:
that:
- '"Skipping ''namespace1.name1'' as it is already installed" in install_existing_no_force.stdout'
- name: install existing with --force - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_server }}' --force
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_force
- name: assert install existing with --force - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_existing_force.stdout'
- name: remove test installed collection - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install pre-release as explicit version to custom dir - {{ test_name }}
command: ansible-galaxy collection install 'namespace1.name1:1.1.0-beta.1' -s '{{ test_server }}' -p '{{ galaxy_dir }}/ansible_collections'
register: install_prerelease
- name: get result of install pre-release as explicit version to custom dir - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release as explicit version to custom dir - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: Remove beta
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
- name: install pre-release version with --pre to custom dir - {{ test_name }}
command: ansible-galaxy collection install --pre 'namespace1.name1' -s '{{ test_server }}' -p '{{ galaxy_dir }}/ansible_collections'
register: install_prerelease
- name: get result of install pre-release version with --pre to custom dir - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release version with --pre to custom dir - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: install multiple collections with dependencies - {{ test_name }}
command: ansible-galaxy collection install parent_dep.parent_collection namespace2.name -s {{ test_name }}
args:
chdir: '{{ galaxy_dir }}/ansible_collections'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
register: install_multiple_with_dep
- name: get result of install multiple collections with dependencies - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_multiple_with_dep_actual
loop_control:
loop_var: collection
loop:
- namespace: namespace2
name: name
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert install multiple collections with dependencies - {{ test_name }}
assert:
that:
- (install_multiple_with_dep_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[2].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_multiple_with_dep_actual.results[3].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: expect failure with dep resolution failure
command: ansible-galaxy collection install fail_namespace.fail_collection -s {{ test_server }}
register: fail_dep_mismatch
failed_when: '"Cannot meet dependency requirement ''fail_dep2.name:<0.0.5'' for collection fail_namespace.fail_collection" not in fail_dep_mismatch.stderr'
- name: download a collection for an offline install - {{ test_name }}
get_url:
url: '{{ test_server }}custom/collections/namespace3-name-1.0.0.tar.gz'
dest: '{{ galaxy_dir }}/namespace3.tar.gz'
- name: install a collection from a tarball - {{ test_name }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/namespace3.tar.gz'
register: install_tarball
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a tarball - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace3/name/MANIFEST.json'
register: install_tarball_actual
- name: assert install a collection from a tarball - {{ test_name }}
assert:
that:
- '"Installing ''namespace3.name:1.0.0'' to" in install_tarball.stdout'
- (install_tarball_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: setup bad tarball - {{ test_name }}
script: build_bad_tar.py {{ galaxy_dir | quote }}
- name: fail to install a collection from a bad tarball - {{ test_name }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/suspicious-test-1.0.0.tar.gz'
register: fail_bad_tar
failed_when: fail_bad_tar.rc != 1 and "Cannot extract tar entry '../../outside.sh' as it will be placed outside the collection directory" not in fail_bad_tar.stderr
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
- name: get result of failed collection install - {{ test_name }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/suspicious'
register: fail_bad_tar_actual
- name: assert result of failed collection install - {{ test_name }}
assert:
that:
- not fail_bad_tar_actual.stat.exists
- name: install a collection from a URI - {{ test_name }}
command: ansible-galaxy collection install '{{ test_server }}custom/collections/namespace4-name-1.0.0.tar.gz'
register: install_uri
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a URI - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace4/name/MANIFEST.json'
register: install_uri_actual
- name: assert install a collection from a URI - {{ test_name }}
assert:
that:
- '"Installing ''namespace4.name:1.0.0'' to" in install_uri.stdout'
- (install_uri_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: fail to install a collection with an undefined URL - {{ test_name }}
command: ansible-galaxy collection install namespace5.name
register: fail_undefined_server
failed_when: '"No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined" not in fail_undefined_server.stderr'
environment:
ANSIBLE_GALAXY_SERVER_LIST: undefined
- name: install a collection with an empty server list - {{ test_name }}
command: ansible-galaxy collection install namespace5.name -s '{{ test_server }}'
register: install_empty_server_list
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_SERVER_LIST: ''
- name: get result of a collection with an empty server list - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace5/name/MANIFEST.json'
register: install_empty_server_list_actual
- name: assert install a collection with an empty server list - {{ test_name }}
assert:
that:
- '"Installing ''namespace5.name:1.0.0'' to" in install_empty_server_list.stdout'
- (install_empty_server_list_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: remove test collection install directory - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
test/units/cli/test_galaxy.py
|
# -*- coding: utf-8 -*-
# (c) 2016, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ansible
import json
import os
import pytest
import shutil
import stat
import tarfile
import tempfile
import yaml
import ansible.constants as C
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.galaxy.api import GalaxyAPI
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils import context_objects as co
from units.compat import unittest
from units.compat.mock import patch, MagicMock
@pytest.fixture(autouse=True)
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
class TestGalaxy(unittest.TestCase):
@classmethod
def setUpClass(cls):
'''creating prerequisites for installing a role; setUpClass occurs ONCE whereas setUp occurs with every method tested.'''
# class data for easy viewing: role_dir, role_tar, role_name, role_req, role_path
cls.temp_dir = tempfile.mkdtemp(prefix='ansible-test_galaxy-')
os.chdir(cls.temp_dir)
if os.path.exists("./delete_me"):
shutil.rmtree("./delete_me")
# creating framework for a role
gc = GalaxyCLI(args=["ansible-galaxy", "init", "--offline", "delete_me"])
gc.run()
cls.role_dir = "./delete_me"
cls.role_name = "delete_me"
# making a temp dir for role installation
cls.role_path = os.path.join(tempfile.mkdtemp(), "roles")
if not os.path.isdir(cls.role_path):
os.makedirs(cls.role_path)
# creating a tar file name for class data
cls.role_tar = './delete_me.tar.gz'
cls.makeTar(cls.role_tar, cls.role_dir)
# creating a temp file with installation requirements
cls.role_req = './delete_me_requirements.yml'
fd = open(cls.role_req, "w")
fd.write("- 'src': '%s'\n 'name': '%s'\n 'path': '%s'" % (cls.role_tar, cls.role_name, cls.role_path))
fd.close()
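# The requirements file written above looks like this (illustrative values;
# the real tar and role paths are generated per test run):
#   - 'src': './delete_me.tar.gz'
#     'name': 'delete_me'
#     'path': '<temp_dir>/roles'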
@classmethod
def makeTar(cls, output_file, source_dir):
''' used for making a tarfile from a role directory '''
# adding directory into a tar file
try:
tar = tarfile.open(output_file, "w:gz")
tar.add(source_dir, arcname=os.path.basename(source_dir))
except AttributeError: # tarfile obj. has no attribute __exit__ prior to Python 2.7
pass
finally: # ensuring closure of tarfile obj
tar.close()
@classmethod
def tearDownClass(cls):
'''After tests are finished removes things created in setUpClass'''
# deleting the temp role directory
if os.path.exists(cls.role_dir):
shutil.rmtree(cls.role_dir)
if os.path.exists(cls.role_req):
os.remove(cls.role_req)
if os.path.exists(cls.role_tar):
os.remove(cls.role_tar)
if os.path.isdir(cls.role_path):
shutil.rmtree(cls.role_path)
os.chdir('/')
shutil.rmtree(cls.temp_dir)
def setUp(self):
# Reset the stored command line args
co.GlobalCLIArgs._Singleton__instance = None
self.default_args = ['ansible-galaxy']
def tearDown(self):
# Reset the stored command line args
co.GlobalCLIArgs._Singleton__instance = None
def test_init(self):
galaxy_cli = GalaxyCLI(args=self.default_args)
self.assertTrue(isinstance(galaxy_cli, GalaxyCLI))
def test_display_min(self):
gc = GalaxyCLI(args=self.default_args)
role_info = {'name': 'some_role_name'}
display_result = gc._display_role_info(role_info)
self.assertTrue(display_result.find('some_role_name') > -1)
def test_display_galaxy_info(self):
gc = GalaxyCLI(args=self.default_args)
galaxy_info = {}
role_info = {'name': 'some_role_name',
'galaxy_info': galaxy_info}
display_result = gc._display_role_info(role_info)
if display_result.find('\n\tgalaxy_info:') == -1:
self.fail('Expected galaxy_info to be indented once')
def test_run(self):
''' verifies that the GalaxyCLI object's api is created and that execute() is called. '''
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--ignore-errors", "imaginary_role"])
gc.parse()
with patch.object(ansible.cli.CLI, "run", return_value=None) as mock_run:
gc.run()
# testing
self.assertIsInstance(gc.galaxy, ansible.galaxy.Galaxy)
self.assertEqual(mock_run.call_count, 1)
self.assertTrue(isinstance(gc.api, ansible.galaxy.api.GalaxyAPI))
def test_execute_remove(self):
# installing role
gc = GalaxyCLI(args=["ansible-galaxy", "install", "-p", self.role_path, "-r", self.role_req, '--force'])
gc.run()
# location where the role was installed
role_file = os.path.join(self.role_path, self.role_name)
# removing role
# Have to reset the arguments in the context object manually since we're doing the
# equivalent of running the command line program twice
co.GlobalCLIArgs._Singleton__instance = None
gc = GalaxyCLI(args=["ansible-galaxy", "remove", role_file, self.role_name])
gc.run()
# testing role was removed
removed_role = not os.path.exists(role_file)
self.assertTrue(removed_role)
def test_exit_without_ignore_without_flag(self):
''' tests that GalaxyCLI exits with the error specified if the --ignore-errors flag is not used '''
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name"])
with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
# testing that error expected is raised
self.assertRaises(AnsibleError, gc.run)
self.assertTrue(mocked_display.called_once_with("- downloading role 'fake_role_name', owned by "))
def test_exit_without_ignore_with_flag(self):
''' tests that GalaxyCLI exits without the error specified if the --ignore-errors flag is used '''
# testing with --ignore-errors flag
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name", "--ignore-errors"])
with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
gc.run()
self.assertTrue(mocked_display.called_once_with("- downloading role 'fake_role_name', owned by "))
def test_parse_no_action(self):
''' testing the options parser when no action is given '''
gc = GalaxyCLI(args=["ansible-galaxy", ""])
self.assertRaises(SystemExit, gc.parse)
def test_parse_invalid_action(self):
''' testing the options parser when an invalid action is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "NOT_ACTION"])
self.assertRaises(SystemExit, gc.parse)
def test_parse_delete(self):
''' testing the options parser when the action 'delete' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "delete", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_import(self):
''' testing the options parser when the action 'import' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "import", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['wait'], True)
self.assertEqual(context.CLIARGS['reference'], None)
self.assertEqual(context.CLIARGS['check_status'], False)
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_info(self):
''' testing the options parser when the action 'info' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "info", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['offline'], False)
def test_parse_init(self):
''' testing the options parser when the action 'init' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "init", "foo"])
gc.parse()
self.assertEqual(context.CLIARGS['offline'], False)
self.assertEqual(context.CLIARGS['force'], False)
def test_parse_install(self):
''' testing the options parser when the action 'install' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "install"])
gc.parse()
self.assertEqual(context.CLIARGS['ignore_errors'], False)
self.assertEqual(context.CLIARGS['no_deps'], False)
self.assertEqual(context.CLIARGS['role_file'], None)
self.assertEqual(context.CLIARGS['force'], False)
def test_parse_list(self):
''' testing the options parser when the action 'list' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "list"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_login(self):
''' testing the options parser when the action 'login' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "login"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
self.assertEqual(context.CLIARGS['token'], None)
def test_parse_remove(self):
''' testing the options parser when the action 'remove' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "remove", "foo"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_search(self):
        ''' testing the options parser when the action 'search' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "search"])
gc.parse()
self.assertEqual(context.CLIARGS['platforms'], None)
self.assertEqual(context.CLIARGS['galaxy_tags'], None)
self.assertEqual(context.CLIARGS['author'], None)
def test_parse_setup(self):
''' testing the options parser when the action 'setup' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "setup", "source", "github_user", "github_repo", "secret"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
self.assertEqual(context.CLIARGS['remove_id'], None)
self.assertEqual(context.CLIARGS['setup_list'], False)
class ValidRoleTests(object):
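    ''' Mixin of shared checks for roles created by ansible-galaxy init; the TestGalaxyInit* TestCase subclasses supply setUpClass. '''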
expected_role_dirs = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
@classmethod
def setUpRole(cls, role_name, galaxy_args=None, skeleton_path=None, use_explicit_type=False):
if galaxy_args is None:
galaxy_args = []
if skeleton_path is not None:
cls.role_skeleton_path = skeleton_path
galaxy_args += ['--role-skeleton', skeleton_path]
# Make temp directory for testing
cls.test_dir = tempfile.mkdtemp()
if not os.path.isdir(cls.test_dir):
os.makedirs(cls.test_dir)
cls.role_dir = os.path.join(cls.test_dir, role_name)
cls.role_name = role_name
# create role using default skeleton
args = ['ansible-galaxy']
if use_explicit_type:
args += ['role']
args += ['init', '-c', '--offline'] + galaxy_args + ['--init-path', cls.test_dir, cls.role_name]
gc = GalaxyCLI(args=args)
gc.run()
cls.gc = gc
if skeleton_path is None:
cls.role_skeleton_path = gc.galaxy.default_role_skeleton_path
@classmethod
def tearDownClass(cls):
if os.path.isdir(cls.test_dir):
shutil.rmtree(cls.test_dir)
def test_metadata(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('galaxy_info', metadata, msg='unable to find galaxy_info in metadata')
self.assertIn('dependencies', metadata, msg='unable to find dependencies in metadata')
def test_readme(self):
readme_path = os.path.join(self.role_dir, 'README.md')
self.assertTrue(os.path.exists(readme_path), msg='Readme doesn\'t exist')
def test_main_ymls(self):
need_main_ymls = set(self.expected_role_dirs) - set(['meta', 'tests', 'files', 'templates'])
for d in need_main_ymls:
main_yml = os.path.join(self.role_dir, d, 'main.yml')
self.assertTrue(os.path.exists(main_yml))
expected_string = "---\n# {0} file for {1}".format(d, self.role_name)
with open(main_yml, 'r') as f:
self.assertEqual(expected_string, f.read().strip())
def test_role_dirs(self):
for d in self.expected_role_dirs:
self.assertTrue(os.path.isdir(os.path.join(self.role_dir, d)), msg="Expected role subdirectory {0} doesn't exist".format(d))
def test_travis_yml(self):
with open(os.path.join(self.role_dir, '.travis.yml'), 'r') as f:
contents = f.read()
with open(os.path.join(self.role_skeleton_path, '.travis.yml'), 'r') as f:
expected_contents = f.read()
self.assertEqual(expected_contents, contents, msg='.travis.yml does not match expected')
def test_readme_contents(self):
with open(os.path.join(self.role_dir, 'README.md'), 'r') as readme:
contents = readme.read()
with open(os.path.join(self.role_skeleton_path, 'README.md'), 'r') as f:
expected_contents = f.read()
self.assertEqual(expected_contents, contents, msg='README.md does not match expected')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertEqual(test_playbook[0]['remote_user'], 'root')
self.assertListEqual(test_playbook[0]['roles'], [self.role_name], msg='The list of roles included in the test play doesn\'t match')
class TestGalaxyInitDefault(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole(role_name='delete_me')
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
class TestGalaxyInitAPB(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole('delete_me_apb', galaxy_args=['--type=apb'])
def test_metadata_apb_tag(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('apb', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='apb tag not set in role metadata')
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
def test_apb_yml(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'apb.yml')), msg='apb.yml was not created')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertFalse(test_playbook[0]['gather_facts'])
self.assertEqual(test_playbook[0]['connection'], 'local')
self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml')
class TestGalaxyInitContainer(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole('delete_me_container', galaxy_args=['--type=container'])
def test_metadata_container_tag(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('container', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='container tag not set in role metadata')
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
def test_meta_container_yml(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'meta', 'container.yml')), msg='container.yml was not created')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertFalse(test_playbook[0]['gather_facts'])
self.assertEqual(test_playbook[0]['connection'], 'local')
self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml')
class TestGalaxyInitSkeleton(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
role_skeleton_path = os.path.join(os.path.split(__file__)[0], 'test_data', 'role_skeleton')
cls.setUpRole('delete_me_skeleton', skeleton_path=role_skeleton_path, use_explicit_type=True)
def test_empty_files_dir(self):
files_dir = os.path.join(self.role_dir, 'files')
self.assertTrue(os.path.isdir(files_dir))
self.assertListEqual(os.listdir(files_dir), [], msg='we expect the files directory to be empty, is ignore working?')
def test_template_ignore_jinja(self):
test_conf_j2 = os.path.join(self.role_dir, 'templates', 'test.conf.j2')
self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?")
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?")
def test_template_ignore_jinja_subfolder(self):
test_conf_j2 = os.path.join(self.role_dir, 'templates', 'subfolder', 'test.conf.j2')
self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?")
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?")
def test_template_ignore_similar_folder(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'templates_extra', 'templates.txt')))
def test_skeleton_option(self):
self.assertEqual(self.role_skeleton_path, context.CLIARGS['role_skeleton'], msg='Skeleton path was not parsed properly from the command line')
@pytest.mark.parametrize('cli_args, expected', [
(['ansible-galaxy', 'collection', 'init', 'abc.def'], 0),
(['ansible-galaxy', 'collection', 'init', 'abc.def', '-vvv'], 3),
(['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def'], 2),
# Due to our manual parsing we want to verify that -v set in the sub parser takes precedence. This behaviour is
# deprecated and tests should be removed when the code that handles it is removed
(['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def', '-v'], 1),
(['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def', '-vvvv'], 4),
(['ansible-galaxy', '-vvv', 'init', 'name'], 3),
(['ansible-galaxy', '-vvvvv', 'init', '-v', 'name'], 1),
])
def test_verbosity_arguments(cli_args, expected, monkeypatch):
# Mock out the functions so we don't actually execute anything
for func_name in [f for f in dir(GalaxyCLI) if f.startswith("execute_")]:
monkeypatch.setattr(GalaxyCLI, func_name, MagicMock())
cli = GalaxyCLI(args=cli_args)
cli.run()
assert context.CLIARGS['verbosity'] == expected
@pytest.fixture()
def collection_skeleton(request, tmp_path_factory):
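    ''' Creates a collection skeleton directory via ansible-galaxy collection init and returns its path '''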
name, skeleton_path = request.param
galaxy_args = ['ansible-galaxy', 'collection', 'init', '-c']
if skeleton_path is not None:
galaxy_args += ['--collection-skeleton', skeleton_path]
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
galaxy_args += ['--init-path', test_dir, name]
GalaxyCLI(args=galaxy_args).run()
namespace_name, collection_name = name.split('.', 1)
collection_dir = os.path.join(test_dir, namespace_name, collection_name)
return collection_dir
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.my_collection', None),
], indirect=True)
def test_collection_default(collection_skeleton):
meta_path = os.path.join(collection_skeleton, 'galaxy.yml')
with open(meta_path, 'r') as galaxy_meta:
metadata = yaml.safe_load(galaxy_meta)
assert metadata['namespace'] == 'ansible_test'
assert metadata['name'] == 'my_collection'
assert metadata['authors'] == ['your name <[email protected]>']
assert metadata['readme'] == 'README.md'
assert metadata['version'] == '1.0.0'
assert metadata['description'] == 'your collection description'
assert metadata['license'] == ['GPL-2.0-or-later']
assert metadata['tags'] == []
assert metadata['dependencies'] == {}
assert metadata['documentation'] == 'http://docs.example.com'
assert metadata['repository'] == 'http://example.com/repository'
assert metadata['homepage'] == 'http://example.com'
assert metadata['issues'] == 'http://example.com/issue/tracker'
for d in ['docs', 'plugins', 'roles']:
assert os.path.isdir(os.path.join(collection_skeleton, d)), \
"Expected collection subdirectory {0} doesn't exist".format(d)
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.delete_me_skeleton', os.path.join(os.path.split(__file__)[0], 'test_data', 'collection_skeleton')),
], indirect=True)
def test_collection_skeleton(collection_skeleton):
meta_path = os.path.join(collection_skeleton, 'galaxy.yml')
with open(meta_path, 'r') as galaxy_meta:
metadata = yaml.safe_load(galaxy_meta)
assert metadata['namespace'] == 'ansible_test'
assert metadata['name'] == 'delete_me_skeleton'
assert metadata['authors'] == ['Ansible Cow <[email protected]>', 'Tu Cow <[email protected]>']
assert metadata['version'] == '0.1.0'
assert metadata['readme'] == 'README.md'
assert len(metadata) == 5
assert os.path.exists(os.path.join(collection_skeleton, 'README.md'))
# Test empty directories exist and are empty
for empty_dir in ['plugins/action', 'plugins/filter', 'plugins/inventory', 'plugins/lookup',
'plugins/module_utils', 'plugins/modules']:
assert os.listdir(os.path.join(collection_skeleton, empty_dir)) == []
# Test files that don't end with .j2 were not templated
doc_file = os.path.join(collection_skeleton, 'docs', 'My Collection.md')
with open(doc_file, 'r') as f:
doc_contents = f.read()
assert doc_contents.strip() == 'Welcome to my test collection doc for {{ namespace }}.'
# Test files that end with .j2 but are in the templates directory were not templated
for template_dir in ['playbooks/templates', 'playbooks/templates/subfolder',
'roles/common/templates', 'roles/common/templates/subfolder']:
test_conf_j2 = os.path.join(collection_skeleton, template_dir, 'test.conf.j2')
assert os.path.exists(test_conf_j2)
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
assert expected_contents == contents.strip()
@pytest.fixture()
def collection_artifact(collection_skeleton, tmp_path_factory):
''' Creates a collection artifact tarball that is ready to be published and installed '''
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output'))
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_skeleton, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
# S_ISUID should not be present on extraction.
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_ISUID | stat.S_IEXEC)
# Because we call GalaxyCLI in collection_skeleton we need to reset the singleton back to None so it uses the new
# args, we reset the original args once it is done.
orig_cli_args = co.GlobalCLIArgs._Singleton__instance
try:
co.GlobalCLIArgs._Singleton__instance = None
galaxy_args = ['ansible-galaxy', 'collection', 'build', collection_skeleton, '--output-path', output_dir]
gc = GalaxyCLI(args=galaxy_args)
gc.run()
yield output_dir
finally:
co.GlobalCLIArgs._Singleton__instance = orig_cli_args
def test_invalid_skeleton_path():
expected = "- the skeleton path '/fake/path' does not exist, cannot init collection"
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', 'my.collection', '--collection-skeleton',
'/fake/path'])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize("name", [
"",
"invalid",
"hypen-ns.collection",
"ns.hyphen-collection",
"ns.collection.weird",
])
def test_invalid_collection_name_init(name):
expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % name
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', name])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize("name, expected", [
("", ""),
("invalid", "invalid"),
("invalid:1.0.0", "invalid"),
("hypen-ns.collection", "hypen-ns.collection"),
("ns.hyphen-collection", "ns.hyphen-collection"),
("ns.collection.weird", "ns.collection.weird"),
])
def test_invalid_collection_name_install(name, expected, tmp_path_factory):
install_path = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % expected
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', name, '-p', os.path.join(install_path, 'install')])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.build_collection', None),
], indirect=True)
def test_collection_build(collection_artifact):
tar_path = os.path.join(collection_artifact, 'ansible_test-build_collection-1.0.0.tar.gz')
assert tarfile.is_tarfile(tar_path)
with tarfile.open(tar_path, mode='r') as tar:
tar_members = tar.getmembers()
valid_files = ['MANIFEST.json', 'FILES.json', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md',
'runme.sh']
assert len(tar_members) == len(valid_files)
# Verify the uid and gid is 0 and the correct perms are set
for member in tar_members:
assert member.name in valid_files
assert member.gid == 0
assert member.gname == ''
assert member.uid == 0
assert member.uname == ''
if member.isdir() or member.name == 'runme.sh':
assert member.mode == 0o0755
else:
assert member.mode == 0o0644
manifest_file = tar.extractfile(tar_members[0])
try:
manifest = json.loads(to_text(manifest_file.read()))
finally:
manifest_file.close()
coll_info = manifest['collection_info']
file_manifest = manifest['file_manifest_file']
assert manifest['format'] == 1
assert len(manifest.keys()) == 3
assert coll_info['namespace'] == 'ansible_test'
assert coll_info['name'] == 'build_collection'
assert coll_info['version'] == '1.0.0'
assert coll_info['authors'] == ['your name <[email protected]>']
assert coll_info['readme'] == 'README.md'
assert coll_info['tags'] == []
assert coll_info['description'] == 'your collection description'
assert coll_info['license'] == ['GPL-2.0-or-later']
assert coll_info['license_file'] is None
assert coll_info['dependencies'] == {}
assert coll_info['repository'] == 'http://example.com/repository'
assert coll_info['documentation'] == 'http://docs.example.com'
assert coll_info['homepage'] == 'http://example.com'
assert coll_info['issues'] == 'http://example.com/issue/tracker'
assert len(coll_info.keys()) == 14
assert file_manifest['name'] == 'FILES.json'
assert file_manifest['ftype'] == 'file'
assert file_manifest['chksum_type'] == 'sha256'
assert file_manifest['chksum_sha256'] is not None # Order of keys makes it hard to verify the checksum
assert file_manifest['format'] == 1
assert len(file_manifest.keys()) == 5
files_file = tar.extractfile(tar_members[1])
try:
files = json.loads(to_text(files_file.read()))
finally:
files_file.close()
assert len(files['files']) == 7
assert files['format'] == 1
assert len(files.keys()) == 2
valid_files_entries = ['.', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md', 'runme.sh']
for file_entry in files['files']:
assert file_entry['name'] in valid_files_entries
assert file_entry['format'] == 1
if file_entry['name'] in ['plugins/README.md', 'runme.sh']:
assert file_entry['ftype'] == 'file'
assert file_entry['chksum_type'] == 'sha256'
# Can't test the actual checksum as the html link changes based on the version or the file contents
# don't matter
assert file_entry['chksum_sha256'] is not None
elif file_entry['name'] == 'README.md':
assert file_entry['ftype'] == 'file'
assert file_entry['chksum_type'] == 'sha256'
assert file_entry['chksum_sha256'] == '6d8b5f9b5d53d346a8cd7638a0ec26e75e8d9773d952162779a49d25da6ef4f5'
else:
assert file_entry['ftype'] == 'dir'
assert file_entry['chksum_type'] is None
assert file_entry['chksum_sha256'] is None
assert len(file_entry.keys()) == 5
@pytest.fixture()
def collection_install(reset_cli_args, tmp_path_factory, monkeypatch):
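    ''' Mocks out install_collections and Display.warning, yielding both mocks and a temp output directory '''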
mock_install = MagicMock()
monkeypatch.setattr(ansible.cli.galaxy, 'install_collections', mock_install)
mock_warning = MagicMock()
monkeypatch.setattr(ansible.utils.display.Display, 'warning', mock_warning)
output_dir = to_text((tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output')))
yield mock_install, mock_warning, output_dir
def test_collection_install_with_names(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
('namespace2.collection', '1.0.1', None)]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
def test_collection_install_with_requirements_file(collection_install):
mock_install, mock_warning, output_dir = collection_install
requirements_file = os.path.join(output_dir, 'requirements.yml')
with open(requirements_file, 'wb') as req_obj:
req_obj.write(b'''---
collections:
- namespace.coll
- name: namespace2.coll
version: '>2.0.1'
''')
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.coll', '*', None),
('namespace2.coll', '>2.0.1', None)]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
def test_collection_install_with_relative_path(collection_install, monkeypatch):
mock_install = collection_install[0]
mock_req = MagicMock()
mock_req.return_value = {'collections': [('namespace.coll', '*', None)]}
monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
monkeypatch.setattr(os, 'makedirs', MagicMock())
    requirements_file = './requirements.yml'
collections_path = './ansible_collections'
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.coll', '*', None)]
assert mock_install.call_args[0][1] == os.path.abspath(collections_path)
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
assert mock_req.call_count == 1
assert mock_req.call_args[0][0] == os.path.abspath(requirements_file)
def test_collection_install_with_unexpanded_path(collection_install, monkeypatch):
mock_install = collection_install[0]
mock_req = MagicMock()
mock_req.return_value = {'collections': [('namespace.coll', '*', None)]}
monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
monkeypatch.setattr(os, 'makedirs', MagicMock())
    requirements_file = '~/requirements.yml'
collections_path = '~/ansible_collections'
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.coll', '*', None)]
assert mock_install.call_args[0][1] == os.path.expanduser(os.path.expandvars(collections_path))
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
assert mock_req.call_count == 1
assert mock_req.call_args[0][0] == os.path.expanduser(os.path.expandvars(requirements_file))
def test_collection_install_in_collection_dir(collection_install, monkeypatch):
mock_install, mock_warning, output_dir = collection_install
collections_path = C.COLLECTIONS_PATHS[0]
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_warning.call_count == 0
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
('namespace2.collection', '1.0.1', None)]
assert mock_install.call_args[0][1] == os.path.join(collections_path, 'ansible_collections')
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
def test_collection_install_with_url(collection_install):
mock_install, dummy, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'https://foo/bar/foo-bar-v1.0.0.tar.gz',
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('https://foo/bar/foo-bar-v1.0.0.tar.gz', '*', None)]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
def test_collection_install_name_and_requirements_fail(collection_install):
test_path = collection_install[2]
expected = 'The positional collection_name arg and --requirements-file are mutually exclusive.'
with pytest.raises(AnsibleError, match=expected):
GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path',
test_path, '--requirements-file', test_path]).run()
def test_collection_install_no_name_and_requirements_fail(collection_install):
test_path = collection_install[2]
expected = 'You must specify a collection name or a requirements file.'
with pytest.raises(AnsibleError, match=expected):
GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', '--collections-path', test_path]).run()
def test_collection_install_path_with_ansible_collections(collection_install):
mock_install, mock_warning, output_dir = collection_install
collection_path = os.path.join(output_dir, 'ansible_collections')
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', collection_path]
GalaxyCLI(args=galaxy_args).run()
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" \
% collection_path in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
('namespace2.collection', '1.0.1', None)]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is True
assert mock_install.call_args[0][4] is False
assert mock_install.call_args[0][5] is False
assert mock_install.call_args[0][6] is False
assert mock_install.call_args[0][7] is False
def test_collection_install_ignore_certs(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--ignore-certs']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][3] is False
def test_collection_install_force(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--force']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][6] is True
def test_collection_install_force_deps(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--force-with-deps']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][7] is True
def test_collection_install_no_deps(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--no-deps']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][5] is True
def test_collection_install_ignore(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--ignore-errors']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][4] is True
def test_collection_install_custom_server(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--server', 'https://galaxy-dev.ansible.com']
GalaxyCLI(args=galaxy_args).run()
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy-dev.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
@pytest.fixture()
def requirements_file(request, tmp_path_factory):
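    ''' Writes the parametrized content, if any, to a temporary requirements.yml and yields its path '''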
content = request.param
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Requirements'))
requirements_file = os.path.join(test_dir, 'requirements.yml')
if content:
with open(requirements_file, 'wb') as req_obj:
req_obj.write(to_bytes(content))
yield requirements_file
@pytest.fixture()
def requirements_cli(monkeypatch):
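    ''' Returns a GalaxyCLI install instance with execute_install mocked out '''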
monkeypatch.setattr(GalaxyCLI, 'execute_install', MagicMock())
cli = GalaxyCLI(args=['ansible-galaxy', 'install'])
cli.run()
return cli
@pytest.mark.parametrize('requirements_file', [None], indirect=True)
def test_parse_requirements_file_that_doesnt_exist(requirements_cli, requirements_file):
expected = "The requirements file '%s' does not exist." % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', ['not a valid yml file: hi: world'], indirect=True)
def test_parse_requirements_file_that_isnt_yaml(requirements_cli, requirements_file):
expected = "Failed to parse the requirements yml at '%s' with the following error" % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', [('''
# Older role based requirements.yml
- galaxy.role
- anotherrole
''')], indirect=True)
def test_parse_requirements_in_older_format_illegal(requirements_cli, requirements_file):
expected = "Expecting requirements file to be a dict with the key 'collections' that contains a list of " \
"collections to install"
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file, allow_old_format=False)
@pytest.mark.parametrize('requirements_file', ['''
collections:
- version: 1.0.0
'''], indirect=True)
def test_parse_requirements_without_mandatory_name_key(requirements_cli, requirements_file):
expected = "Collections requirement entry should contain the key name."
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', [('''
collections:
- namespace.collection1
- namespace.collection2
'''), ('''
collections:
- name: namespace.collection1
- name: namespace.collection2
''')], indirect=True)
def test_parse_requirements(requirements_cli, requirements_file):
expected = {
'roles': [],
'collections': [('namespace.collection1', '*', None), ('namespace.collection2', '*', None)]
}
actual = requirements_cli._parse_requirements_file(requirements_file)
assert actual == expected
@pytest.mark.parametrize('requirements_file', ['''
collections:
- name: namespace.collection1
version: ">=1.0.0,<=2.0.0"
source: https://galaxy-dev.ansible.com
- namespace.collection2'''], indirect=True)
def test_parse_requirements_with_extra_info(requirements_cli, requirements_file):
actual = requirements_cli._parse_requirements_file(requirements_file)
assert len(actual['roles']) == 0
assert len(actual['collections']) == 2
assert actual['collections'][0][0] == 'namespace.collection1'
assert actual['collections'][0][1] == '>=1.0.0,<=2.0.0'
assert actual['collections'][0][2].api_server == 'https://galaxy-dev.ansible.com'
assert actual['collections'][0][2].name == 'explicit_requirement_namespace.collection1'
assert actual['collections'][0][2].token is None
assert actual['collections'][0][2].username is None
assert actual['collections'][0][2].password is None
assert actual['collections'][0][2].validate_certs is True
assert actual['collections'][1] == ('namespace.collection2', '*', None)
@pytest.mark.parametrize('requirements_file', ['''
roles:
- username.role_name
- src: username2.role_name2
- src: ssh://github.com/user/repo
scm: git
collections:
- namespace.collection2
'''], indirect=True)
def test_parse_requirements_with_roles_and_collections(requirements_cli, requirements_file):
actual = requirements_cli._parse_requirements_file(requirements_file)
assert len(actual['roles']) == 3
assert actual['roles'][0].name == 'username.role_name'
assert actual['roles'][1].name == 'username2.role_name2'
assert actual['roles'][2].name == 'repo'
assert actual['roles'][2].src == 'ssh://github.com/user/repo'
assert len(actual['collections']) == 1
assert actual['collections'][0] == ('namespace.collection2', '*', None)
@pytest.mark.parametrize('requirements_file', ['''
collections:
- name: namespace.collection
- name: namespace2.collection2
source: https://galaxy-dev.ansible.com/
- name: namespace3.collection3
source: server
'''], indirect=True)
def test_parse_requirements_with_collection_source(requirements_cli, requirements_file):
galaxy_api = GalaxyAPI(requirements_cli.api, 'server', 'https://config-server')
requirements_cli.api_servers.append(galaxy_api)
actual = requirements_cli._parse_requirements_file(requirements_file)
assert actual['roles'] == []
assert len(actual['collections']) == 3
assert actual['collections'][0] == ('namespace.collection', '*', None)
assert actual['collections'][1][0] == 'namespace2.collection2'
assert actual['collections'][1][1] == '*'
assert actual['collections'][1][2].api_server == 'https://galaxy-dev.ansible.com/'
assert actual['collections'][1][2].name == 'explicit_requirement_namespace2.collection2'
assert actual['collections'][1][2].token is None
assert actual['collections'][2] == ('namespace3.collection3', '*', galaxy_api)
@pytest.mark.parametrize('requirements_file', ['''
- username.included_role
- src: https://github.com/user/repo
'''], indirect=True)
def test_parse_requirements_roles_with_include(requirements_cli, requirements_file):
reqs = [
'ansible.role',
{'include': requirements_file},
]
parent_requirements = os.path.join(os.path.dirname(requirements_file), 'parent.yaml')
with open(to_bytes(parent_requirements), 'wb') as req_fd:
req_fd.write(to_bytes(yaml.safe_dump(reqs)))
actual = requirements_cli._parse_requirements_file(parent_requirements)
assert len(actual['roles']) == 3
assert actual['collections'] == []
assert actual['roles'][0].name == 'ansible.role'
assert actual['roles'][1].name == 'username.included_role'
assert actual['roles'][2].name == 'repo'
assert actual['roles'][2].src == 'https://github.com/user/repo'
@pytest.mark.parametrize('requirements_file', ['''
- username.role
- include: missing.yml
'''], indirect=True)
def test_parse_requirements_roles_with_include_missing(requirements_cli, requirements_file):
expected = "Failed to find include requirements file 'missing.yml' in '%s'" % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65673 |
ansible-galaxy install user experience is disastrous if you use roles and collections
|
##### SUMMARY
If I create one `requirements.yml` file which lists all my Ansible dependencies for a given project, including roles and collections from Galaxy, and then try to install those dependencies using `ansible-galaxy`, the result is unexpected and unintuitive behavior.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.5
config file = /Users/jgeerling/Downloads/blend-test/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.6 (default, Feb 9 2020, 13:28:08) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = True
COLLECTIONS_PATHS(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test']
DEFAULT_ROLES_PATH(/Users/jgeerling/Downloads/blend-test/ansible.cfg) = ['/Users/jgeerling/Downloads/blend-test/roles']
```
##### OS / ENVIRONMENT
- macOS Catalina and Ubuntu
##### STEPS TO REPRODUCE
Create a new playbook project directory (e.g. 'example'). Inside the directory, create a `requirements.yml` file with the contents:
```yaml
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.7
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.5
source: https://galaxy.ansible.com
```
Run:

```
ansible-galaxy install -r requirements.yml
```
##### EXPECTED RESULTS
All my project dependencies are installed as listed in the `requirements.yml` file.
##### ACTUAL RESULTS
```
$ ansible-galaxy install -r requirements.yml
- downloading role 'java', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-java/archive/1.9.7.tar.gz
- extracting geerlingguy.java to /Users/jgeerling/.ansible/roles/geerlingguy.java
- geerlingguy.java (1.9.7) was installed successfully
$ echo $?
0
```
This command succeeds, but only the defined _role_ is installed. After being confused as to why the _collection_ doesn't get installed, I read through the Collections documentation and found that, if I want collections installed, I have to run a separate command. To illustrate:
```
# This will only install roles (and give no warning that collections were detected but not installed).
$ ansible-galaxy install -r requirements.yml
# This results in the same behavior as above.
$ ansible-galaxy role install -r requirements.yml
# This will only install collections (and give no warning that roles were detected but not installed).
$ ansible-galaxy collection install -r requirements.yml
```
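In the meantime, the only way to get everything installed is to run both sub-commands. A minimal sketch of that workaround in Python, reusing the `GalaxyCLI` and `GlobalCLIArgs` singleton-reset pattern that the unit tests above rely on — these are Ansible internals, not a supported API, so this may break between releases:

```python
# A minimal sketch, not a supported API: GalaxyCLI and GlobalCLIArgs are
# Ansible internals and may change between releases.
from ansible.cli.galaxy import GalaxyCLI
from ansible.utils import context_objects as co


def install_everything(requirements_file):
    for sub_command in ('role', 'collection'):
        # CLIARGS is a process-wide singleton, so reset it between runs,
        # exactly as the unit tests do when invoking GalaxyCLI twice.
        co.GlobalCLIArgs._Singleton__instance = None
        GalaxyCLI(args=['ansible-galaxy', sub_command, 'install',
                        '-r', requirements_file]).run()


install_everything('requirements.yml')
```

The equivalent shell workaround for CI is simply `ansible-galaxy role install -r requirements.yml` followed by `ansible-galaxy collection install -r requirements.yml`.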
Because most of my projects will require either roles alone, or roles and collections together, in the `requirements.yml` file for the coming years (many roles on Galaxy that I depend on are not making the jump to collections, especially simpler ones that are minimally maintained), this UX is difficult to stomach: users now have to do _two_ things every time they use or update one of my playbook projects, and CI also needs to run two commands.
It would make more sense to me to have the following behavior:
```
# Installs everything in requirements.yml (collections and roles).
$ ansible-galaxy install -r requirements.yml
# Only installs roles in requirements.yml (displays warning that there are also collections present in the file, if there are any).
$ ansible-galaxy role install -r requirements.yml
# Only installs collections in requirements.yml (displays warning that there are also roles present in the file, if there are any).
$ ansible-galaxy collection install -r requirements.yml
```
|
https://github.com/ansible/ansible/issues/65673
|
https://github.com/ansible/ansible/pull/67843
|
01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
|
ecea15c508f0e081525be036cf76bbb56dbcdd9d
| 2019-12-09T19:48:23Z |
python
| 2020-05-18T19:09:42Z |
test/units/galaxy/test_collection.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import pytest
import re
import tarfile
import uuid
from hashlib import sha256
from io import BytesIO
from units.compat.mock import MagicMock, mock_open, patch
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import api, collection, token
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six.moves import builtins
from ansible.utils import context_objects as co
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
@pytest.fixture(autouse=True)
def reset_cli_args():
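    ''' Clears the GlobalCLIArgs singleton before and after each test '''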
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_input(tmp_path_factory):
''' Creates a collection skeleton directory for build tests '''
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
galaxy_args = ['ansible-galaxy', 'collection', 'init', '%s.%s' % (namespace, collection),
'-c', '--init-path', test_dir, '--collection-skeleton', skeleton]
GalaxyCLI(args=galaxy_args).run()
collection_dir = os.path.join(test_dir, namespace, collection)
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Output'))
return collection_dir, output_dir
@pytest.fixture()
def collection_artifact(monkeypatch, tmp_path_factory):
''' Creates a temp collection artifact and mocked open_url instance for publishing tests '''
mock_open = MagicMock()
monkeypatch.setattr(collection, 'open_url', mock_open)
mock_uuid = MagicMock()
mock_uuid.return_value.hex = 'uuid'
monkeypatch.setattr(uuid, 'uuid4', mock_uuid)
tmp_path = tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections')
input_file = to_text(tmp_path / 'collection.tar.gz')
with tarfile.open(input_file, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
return input_file, mock_open
@pytest.fixture()
def galaxy_yml(request, tmp_path_factory):
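    ''' Writes the parametrized content to a temporary galaxy.yml and yields the path as bytes '''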
b_test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
b_galaxy_yml = os.path.join(b_test_dir, b'galaxy.yml')
with open(b_galaxy_yml, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(request.param))
yield b_galaxy_yml
@pytest.fixture()
def tmp_tarfile(tmp_path_factory, manifest_info):
''' Creates a temporary tar file for _extract_tar_file tests '''
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
b_data = to_bytes(json.dumps(manifest_info, indent=True), errors='surrogate_or_strict')
b_io = BytesIO(b_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(b_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
sha256_hash = sha256()
sha256_hash.update(data)
with tarfile.open(tar_file, 'r') as tfile:
yield temp_dir, tfile, filename, sha256_hash.hexdigest()
@pytest.fixture()
def galaxy_server():
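    ''' Returns a GalaxyAPI client for galaxy.ansible.com configured with a dummy token '''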
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com',
token=token.GalaxyToken(token='key'))
return galaxy_api
@pytest.fixture()
def manifest_template():
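    ''' Factory fixture that builds MANIFEST.json-style collection metadata '''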
def get_manifest_info(namespace='ansible_namespace', name='collection', version='0.1.0'):
return {
"collection_info": {
"namespace": namespace,
"name": name,
"version": version,
"authors": [
"shertel"
],
"readme": "README.md",
"tags": [
"test",
"collection"
],
"description": "Test",
"license": [
"MIT"
],
"license_file": None,
"dependencies": {},
"repository": "https://github.com/{0}/{1}".format(namespace, name),
"documentation": None,
"homepage": None,
"issues": None
},
"file_manifest_file": {
"name": "FILES.json",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "files_manifest_checksum",
"format": 1
},
"format": 1
}
return get_manifest_info
@pytest.fixture()
def manifest_info(manifest_template):
return manifest_template()
@pytest.fixture()
def files_manifest_info():
return {
"files": [
{
"name": ".",
"ftype": "dir",
"chksum_type": None,
"chksum_sha256": None,
"format": 1
},
{
"name": "README.md",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "individual_file_checksum",
"format": 1
}
],
"format": 1}
@pytest.fixture()
def manifest(manifest_info):
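    ''' Yields a mocked MANIFEST.json file object along with the sha256 digest of its contents '''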
b_data = to_bytes(json.dumps(manifest_info))
with patch.object(builtins, 'open', mock_open(read_data=b_data)) as m:
with open('MANIFEST.json', mode='rb') as fake_file:
yield fake_file, sha256(b_data).hexdigest()
@pytest.fixture()
def mock_collection(galaxy_server):
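    ''' Factory fixture that builds CollectionRequirement objects for locally installed or remote collections '''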
def create_mock_collection(namespace='ansible_namespace', name='collection', version='0.1.0', local=True, local_installed=True):
b_path = None
force = False
if local:
mock_collection = collection.CollectionRequirement(namespace, name, b_path, galaxy_server, [version], version, force, skip=local_installed)
else:
download_url = 'https://galaxy.ansible.com/download/{0}-{1}-{2}.tar.gz'.format(namespace, name, version)
digest = '19415a6a6df831df61cffde4a09d1d89ac8d8ca5c0586e85bea0b106d6dff29a'
dependencies = {}
metadata = api.CollectionVersionMetadata(namespace, name, version, download_url, digest, dependencies)
mock_collection = collection.CollectionRequirement(namespace, name, b_path, galaxy_server, [version], version, force, metadata=metadata)
return mock_collection
return create_mock_collection
def test_build_collection_no_galaxy_yaml():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection galaxy.yml path '%s/galaxy.yml' does not exist." % fake_path)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(fake_path, 'output', False)
def test_build_existing_output_file(collection_input):
input_dir, output_dir = collection_input
existing_output_dir = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
os.makedirs(existing_output_dir)
expected = "The output collection artifact '%s' already exists, but is a directory - aborting" \
% to_native(existing_output_dir)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(input_dir, output_dir, False)
def test_build_existing_output_without_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
expected = "The file '%s' already exists. You can use --force to re-create the collection artifact." \
% to_native(existing_output)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(input_dir, output_dir, False)
def test_build_existing_output_with_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
collection.build_collection(input_dir, output_dir, True)
# Verify the file was replaced with an actual tar file
assert tarfile.is_tarfile(existing_output)
@pytest.mark.parametrize('galaxy_yml', [b'namespace: value: broken'], indirect=True)
def test_invalid_yaml_galaxy_file(galaxy_yml):
expected = to_native(b"Failed to parse the galaxy.yml at '%s' with the following error:" % galaxy_yml)
with pytest.raises(AnsibleError, match=expected):
collection._get_galaxy_yml(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b'namespace: test_namespace'], indirect=True)
def test_missing_required_galaxy_key(galaxy_yml):
expected = "The collection galaxy.yml at '%s' is missing the following mandatory keys: authors, name, " \
"readme, version" % to_native(galaxy_yml)
with pytest.raises(AnsibleError, match=expected):
collection._get_galaxy_yml(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
invalid: value"""], indirect=True)
def test_warning_extra_keys(galaxy_yml, monkeypatch):
display_mock = MagicMock()
monkeypatch.setattr(Display, 'warning', display_mock)
collection._get_galaxy_yml(galaxy_yml)
assert display_mock.call_count == 1
assert display_mock.call_args[0][0] == "Found unknown keys in collection galaxy.yml at '%s': invalid"\
% to_text(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md"""], indirect=True)
def test_defaults_galaxy_yml(galaxy_yml):
actual = collection._get_galaxy_yml(galaxy_yml)
assert actual['namespace'] == 'namespace'
assert actual['name'] == 'collection'
assert actual['authors'] == ['Jordan']
assert actual['version'] == '0.1.0'
assert actual['readme'] == 'README.md'
assert actual['description'] is None
assert actual['repository'] is None
assert actual['documentation'] is None
assert actual['homepage'] is None
assert actual['issues'] is None
assert actual['tags'] == []
assert actual['dependencies'] == {}
assert actual['license_ids'] == []
@pytest.mark.parametrize('galaxy_yml', [(b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license: MIT"""), (b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license:
- MIT""")], indirect=True)
def test_galaxy_yml_list_value(galaxy_yml):
actual = collection._get_galaxy_yml(galaxy_yml)
assert actual['license_ids'] == ['MIT']
def test_build_ignore_files_and_folders(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
git_folder = os.path.join(input_dir, '.git')
retry_file = os.path.join(input_dir, 'ansible.retry')
tests_folder = os.path.join(input_dir, 'tests', 'output')
tests_output_file = os.path.join(tests_folder, 'result.txt')
os.makedirs(git_folder)
os.makedirs(tests_folder)
with open(retry_file, 'w+') as ignore_file:
ignore_file.write('random')
ignore_file.flush()
with open(tests_output_file, 'w+') as tests_file:
tests_file.write('random')
tests_file.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
assert actual['format'] == 1
for manifest_entry in actual['files']:
assert manifest_entry['name'] not in ['.git', 'ansible.retry', 'galaxy.yml', 'tests/output', 'tests/output/result.txt']
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(retry_file),
"Skipping '%s' for collection build" % to_text(git_folder),
"Skipping '%s' for collection build" % to_text(tests_folder),
]
assert mock_display.call_count == 4
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
def test_build_ignore_older_release_in_root(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
# This is expected to be ignored because it is in the root collection dir.
release_file = os.path.join(input_dir, 'namespace-collection-0.0.0.tar.gz')
# This is not expected to be ignored because it is not in the root collection dir.
fake_release_file = os.path.join(input_dir, 'plugins', 'namespace-collection-0.0.0.tar.gz')
for filename in [release_file, fake_release_file]:
with open(filename, 'w+') as file_obj:
file_obj.write('random')
file_obj.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
assert actual['format'] == 1
plugin_release_found = False
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'namespace-collection-0.0.0.tar.gz'
if manifest_entry['name'] == 'plugins/namespace-collection-0.0.0.tar.gz':
plugin_release_found = True
assert plugin_release_found
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(release_file)
]
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
def test_build_ignore_patterns(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection',
['*.md', 'plugins/action', 'playbooks/*.j2'])
assert actual['format'] == 1
expected_missing = [
'README.md',
'docs/My Collection.md',
'plugins/action',
'playbooks/templates/test.conf.j2',
'playbooks/templates/subfolder/test.conf.j2',
]
# Files or dirs that are close to a match but are not, make sure they are present
expected_present = [
'docs',
'roles/common/templates/test.conf.j2',
'roles/common/templates/subfolder/test.conf.j2',
]
actual_files = [e['name'] for e in actual['files']]
for m in expected_missing:
assert m not in actual_files
for p in expected_present:
assert p in actual_files
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s/README.md' for collection build" % to_text(input_dir),
"Skipping '%s/docs/My Collection.md' for collection build" % to_text(input_dir),
"Skipping '%s/plugins/action' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/test.conf.j2' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/subfolder/test.conf.j2' for collection build" % to_text(input_dir),
]
assert mock_display.call_count == len(expected_msgs)
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
assert mock_display.mock_calls[4][1][0] in expected_msgs
assert mock_display.mock_calls[5][1][0] in expected_msgs
def test_build_ignore_symlink_target_outside_collection(collection_input, monkeypatch):
input_dir, outside_dir = collection_input
mock_display = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_display)
link_path = os.path.join(input_dir, 'plugins', 'connection')
os.symlink(outside_dir, link_path)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'plugins/connection'
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == "Skipping '%s' as it is a symbolic link to a directory outside " \
"the collection" % to_text(link_path)
def test_build_copy_symlink_target_inside_collection(collection_input):
input_dir = collection_input[0]
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
linked_entries = [e for e in actual['files'] if e['name'].startswith('playbooks/roles/linked')]
assert len(linked_entries) == 3
assert linked_entries[0]['name'] == 'playbooks/roles/linked'
assert linked_entries[0]['ftype'] == 'dir'
assert linked_entries[1]['name'] == 'playbooks/roles/linked/tasks'
assert linked_entries[1]['ftype'] == 'dir'
assert linked_entries[2]['name'] == 'playbooks/roles/linked/tasks/main.yml'
assert linked_entries[2]['ftype'] == 'file'
assert linked_entries[2]['chksum_sha256'] == '9c97a1633c51796999284c62236b8d5462903664640079b80c37bf50080fcbc3'
def test_build_with_symlink_inside_collection(collection_input):
input_dir, output_dir = collection_input
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
file_link = os.path.join(input_dir, 'docs', 'README.md')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
os.symlink(os.path.join(input_dir, 'README.md'), file_link)
collection.build_collection(input_dir, output_dir, False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
linked_members = [m for m in members if m.path.startswith('playbooks/roles/linked/tasks')]
assert len(linked_members) == 2
assert linked_members[0].name == 'playbooks/roles/linked/tasks'
assert linked_members[0].isdir()
assert linked_members[1].name == 'playbooks/roles/linked/tasks/main.yml'
assert linked_members[1].isreg()
linked_task = actual.extractfile(linked_members[1].name)
actual_task = secure_hash_s(linked_task.read())
linked_task.close()
assert actual_task == 'f4dcc52576b6c2cd8ac2832c52493881c4e54226'
linked_file = [m for m in members if m.path == 'docs/README.md']
assert len(linked_file) == 1
assert linked_file[0].isreg()
linked_file_obj = actual.extractfile(linked_file[0].name)
actual_file = secure_hash_s(linked_file_obj.read())
linked_file_obj.close()
assert actual_file == '63444bfc766154e1bc7557ef6280de20d03fcd81'
def test_publish_no_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
collection.publish_collection(artifact_path, galaxy_server, False, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == \
"Collection has been pushed to the Galaxy server %s %s, not waiting until import has completed due to " \
"--no-wait being set. Import task results can be found at %s" % (galaxy_server.name, galaxy_server.api_server,
fake_import_uri)
def test_publish_with_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
mock_wait = MagicMock()
monkeypatch.setattr(galaxy_server, 'wait_import_task', mock_wait)
collection.publish_collection(artifact_path, galaxy_server, True, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_wait.call_count == 1
assert mock_wait.mock_calls[0][1][0] == '1234'
assert mock_display.mock_calls[0][1][0] == "Collection has been published to the Galaxy server test_server %s" \
% galaxy_server.api_server
def test_find_existing_collections(tmp_path_factory, monkeypatch):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
collection1 = os.path.join(test_dir, 'namespace1', 'collection1')
collection2 = os.path.join(test_dir, 'namespace2', 'collection2')
fake_collection1 = os.path.join(test_dir, 'namespace3', 'collection3')
fake_collection2 = os.path.join(test_dir, 'namespace4')
os.makedirs(collection1)
os.makedirs(collection2)
os.makedirs(os.path.split(fake_collection1)[0])
open(fake_collection1, 'wb+').close()
open(fake_collection2, 'wb+').close()
collection1_manifest = json.dumps({
'collection_info': {
'namespace': 'namespace1',
'name': 'collection1',
'version': '1.2.3',
'authors': ['Jordan Borean'],
'readme': 'README.md',
'dependencies': {},
},
'format': 1,
})
with open(os.path.join(collection1, 'MANIFEST.json'), 'wb') as manifest_obj:
manifest_obj.write(to_bytes(collection1_manifest))
mock_warning = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warning)
actual = collection.find_existing_collections(test_dir)
assert len(actual) == 2
for actual_collection in actual:
assert actual_collection.skip is True
if str(actual_collection) == 'namespace1.collection1':
assert actual_collection.namespace == 'namespace1'
assert actual_collection.name == 'collection1'
assert actual_collection.b_path == to_bytes(collection1)
assert actual_collection.api is None
assert actual_collection.versions == set(['1.2.3'])
assert actual_collection.latest_version == '1.2.3'
assert actual_collection.dependencies == {}
else:
assert actual_collection.namespace == 'namespace2'
assert actual_collection.name == 'collection2'
assert actual_collection.b_path == to_bytes(collection2)
assert actual_collection.api is None
assert actual_collection.versions == set(['*'])
assert actual_collection.latest_version == '*'
assert actual_collection.dependencies == {}
assert mock_warning.call_count == 1
assert mock_warning.mock_calls[0][1][0] == "Collection at '%s' does not have a MANIFEST.json file, cannot " \
"detect version." % to_text(collection2)
def test_download_file(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
sha256_hash = sha256()
sha256_hash.update(data)
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection, 'open_url', mock_open)
expected = os.path.join(temp_dir, b'file')
actual = collection._download_file('http://google.com/file', temp_dir, sha256_hash.hexdigest(), True)
assert actual.startswith(expected)
assert os.path.isfile(actual)
with open(actual, 'rb') as file_obj:
assert file_obj.read() == data
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'http://google.com/file'
def test_download_file_hash_mismatch(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection, 'open_url', mock_open)
expected = "Mismatch artifact hash with downloaded file"
with pytest.raises(AnsibleError, match=expected):
collection._download_file('http://google.com/file', temp_dir, 'bad', True)
def test_extract_tar_file_invalid_hash(tmp_tarfile):
temp_dir, tfile, filename, dummy = tmp_tarfile
expected = "Checksum mismatch for '%s' inside collection at '%s'" % (to_native(filename), to_native(tfile.name))
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, filename, temp_dir, temp_dir, "fakehash")
def test_extract_tar_file_missing_member(tmp_tarfile):
temp_dir, tfile, dummy, dummy = tmp_tarfile
expected = "Collection tar at '%s' does not contain the expected file 'missing'." % to_native(tfile.name)
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, 'missing', temp_dir, temp_dir)
def test_extract_tar_file_missing_parent_dir(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
output_dir = os.path.join(temp_dir, b'output')
output_file = os.path.join(output_dir, to_bytes(filename))
collection._extract_tar_file(tfile, filename, output_dir, temp_dir, checksum)
    assert os.path.isfile(output_file)
def test_extract_tar_file_outside_dir(tmp_path_factory):
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
tar_filename = '../%s.sh' % filename
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(tar_filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
expected = re.escape("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(tar_filename))
with tarfile.open(tar_file, 'r') as tfile:
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, tar_filename, os.path.join(temp_dir, to_bytes(filename)), temp_dir)
def test_require_one_of_collections_requirements_with_both():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace.collection', '-r', 'requirements.yml'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements(('namespace.collection',), 'requirements.yml')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'The positional collection_name arg and --requirements-file are mutually exclusive.'
def test_require_one_of_collections_requirements_with_neither():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements((), '')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'You must specify a collection name or a requirements file.'
def test_require_one_of_collections_requirements_with_collections():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace1.collection1', 'namespace2.collection1:1.0.0'])
collections = ('namespace1.collection1', 'namespace2.collection1:1.0.0',)
requirements = cli._require_one_of_collections_requirements(collections, '')
assert requirements == [('namespace1.collection1', '*', None), ('namespace2.collection1', '1.0.0', None)]
@patch('ansible.cli.galaxy.GalaxyCLI._parse_requirements_file')
def test_require_one_of_collections_requirements_with_requirements(mock_parse_requirements_file, galaxy_server):
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', '-r', 'requirements.yml', 'namespace.collection'])
mock_parse_requirements_file.return_value = {'collections': [('namespace.collection', '1.0.5', galaxy_server)]}
requirements = cli._require_one_of_collections_requirements((), 'requirements.yml')
assert mock_parse_requirements_file.call_count == 1
assert requirements == [('namespace.collection', '1.0.5', galaxy_server)]
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify', spec=True)
def test_call_GalaxyCLI(execute_verify):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection']
GalaxyCLI(args=galaxy_args).run()
assert execute_verify.call_count == 1
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_implicit_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'verify', 'namespace.implicit_role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'role', 'verify', 'namespace.role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify_with_defaults(mock_verify_collections):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4']
GalaxyCLI(args=galaxy_args).run()
assert mock_verify_collections.call_count == 1
requirements, search_paths, galaxy_apis, validate, ignore_errors = mock_verify_collections.call_args[0]
assert requirements == [('namespace.collection', '1.0.4', None)]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'https://galaxy.ansible.com'
assert validate is True
assert ignore_errors is False
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify(mock_verify_collections):
GalaxyCLI(args=[
'ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4', '--ignore-certs',
'-p', '~/.ansible', '--ignore-errors', '--server', 'http://galaxy-dev.com',
]).run()
assert mock_verify_collections.call_count == 1
requirements, search_paths, galaxy_apis, validate, ignore_errors = mock_verify_collections.call_args[0]
assert requirements == [('namespace.collection', '1.0.4', None)]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'http://galaxy-dev.com'
assert validate is False
assert ignore_errors is True
def test_verify_file_hash_deleted_file(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=False)) as mock_isfile:
collection_req = collection.CollectionRequirement(namespace, name, './', server, [version], version, False)
collection_req._verify_file_hash(b'path/', 'file', digest, error_queue)
    assert mock_isfile.call_count == 1
assert len(error_queue) == 1
assert error_queue[0].installed is None
assert error_queue[0].expected == digest
def test_verify_file_hash_matching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection_req = collection.CollectionRequirement(namespace, name, './', server, [version], version, False)
collection_req._verify_file_hash(b'path/', 'file', digest, error_queue)
    assert mock_isfile.call_count == 1
assert error_queue == []
def test_verify_file_hash_mismatching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
different_digest = 'not_{0}'.format(digest)
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection_req = collection.CollectionRequirement(namespace, name, './', server, [version], version, False)
collection_req._verify_file_hash(b'path/', 'file', different_digest, error_queue)
    assert mock_isfile.call_count == 1
assert len(error_queue) == 1
assert error_queue[0].installed == digest
assert error_queue[0].expected == different_digest
def test_consume_file(manifest):
manifest_file, checksum = manifest
assert checksum == collection._consume_file(manifest_file)
def test_consume_file_and_write_contents(manifest, manifest_info):
manifest_file, checksum = manifest
write_to = BytesIO()
actual_hash = collection._consume_file(manifest_file, write_to)
write_to.seek(0)
assert to_bytes(json.dumps(manifest_info)) == write_to.read()
assert actual_hash == checksum
def test_get_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
with collection._get_tar_file_member(tfile, filename) as tar_file_obj:
assert isinstance(tar_file_obj, tarfile.ExFileObject)
def test_get_nonexistent_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
file_does_not_exist = filename + 'nonexistent'
with pytest.raises(AnsibleError) as err:
collection._get_tar_file_member(tfile, file_does_not_exist)
assert to_text(err.value.message) == "Collection tar at '%s' does not contain the expected file '%s'." % (to_text(tfile.name), file_does_not_exist)
def test_get_tar_file_hash(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert checksum == collection._get_tar_file_hash(tfile.name, filename)
def test_get_json_from_tar_file(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert 'MANIFEST.json' in tfile.getnames()
data = collection._get_json_from_tar_file(tfile.name, 'MANIFEST.json')
assert isinstance(data, dict)
def test_verify_collection_not_installed(mock_collection):
local_collection = mock_collection(local_installed=False)
remote_collection = mock_collection(local=False)
with patch.object(collection.display, 'display') as mocked_display:
local_collection.verify(remote_collection, './', './')
assert mocked_display.called
assert mocked_display.call_args[0][0] == "'%s.%s' has not been installed, nothing to verify" % (local_collection.namespace, local_collection.name)
def test_verify_successful_debug_info(monkeypatch, mock_collection):
local_collection = mock_collection()
remote_collection = mock_collection(local=False)
monkeypatch.setattr(collection, '_get_tar_file_hash', MagicMock())
monkeypatch.setattr(collection.CollectionRequirement, '_verify_file_hash', MagicMock())
monkeypatch.setattr(collection, '_get_json_from_tar_file', MagicMock())
with patch.object(collection.display, 'vvv') as mock_display:
local_collection.verify(remote_collection, './', './')
namespace = local_collection.namespace
name = local_collection.name
version = local_collection.latest_version
assert mock_display.call_count == 4
assert mock_display.call_args_list[0][0][0] == "Verifying '%s.%s:%s'." % (namespace, name, version)
assert mock_display.call_args_list[1][0][0] == "Installed collection found at './%s/%s'" % (namespace, name)
located = "Remote collection found at 'https://galaxy.ansible.com/download/%s-%s-%s.tar.gz'" % (namespace, name, version)
assert mock_display.call_args_list[2][0][0] == located
verified = "Successfully verified that checksums for '%s.%s:%s' match the remote collection" % (namespace, name, version)
assert mock_display.call_args_list[3][0][0] == verified
def test_verify_different_versions(mock_collection):
local_collection = mock_collection(version='0.1.0')
remote_collection = mock_collection(local=False, version='3.0.0')
with patch.object(collection.display, 'display') as mock_display:
local_collection.verify(remote_collection, './', './')
namespace = local_collection.namespace
name = local_collection.name
installed_version = local_collection.latest_version
compared_version = remote_collection.latest_version
msg = "%s.%s has the version '%s' but is being compared to '%s'" % (namespace, name, installed_version, compared_version)
assert mock_display.call_count == 1
assert mock_display.call_args[0][0] == msg
@patch.object(builtins, 'open', mock_open())
def test_verify_modified_manifest(monkeypatch, mock_collection, manifest_info):
local_collection = mock_collection()
remote_collection = mock_collection(local=False)
monkeypatch.setattr(collection, '_get_tar_file_hash', MagicMock(side_effect=['manifest_checksum']))
monkeypatch.setattr(collection, '_consume_file', MagicMock(side_effect=['manifest_checksum_modified', 'files_manifest_checksum']))
monkeypatch.setattr(collection, '_get_json_from_tar_file', MagicMock(side_effect=[manifest_info, {'files': []}]))
monkeypatch.setattr(collection.os.path, 'isfile', MagicMock(return_value=True))
with patch.object(collection.display, 'display') as mock_display:
with patch.object(collection.display, 'vvv') as mock_debug:
local_collection.verify(remote_collection, './', './')
namespace = local_collection.namespace
name = local_collection.name
assert mock_display.call_count == 3
assert mock_display.call_args_list[0][0][0] == 'Collection %s.%s contains modified content in the following files:' % (namespace, name)
assert mock_display.call_args_list[1][0][0] == '%s.%s' % (namespace, name)
assert mock_display.call_args_list[2][0][0] == ' MANIFEST.json'
# The -vvv output should show details (the checksums do not match)
assert mock_debug.call_count == 5
assert mock_debug.call_args_list[-1][0][0] == ' Expected: manifest_checksum\n Found: manifest_checksum_modified'
@patch.object(builtins, 'open', mock_open())
def test_verify_modified_files_manifest(monkeypatch, mock_collection, manifest_info):
local_collection = mock_collection()
remote_collection = mock_collection(local=False)
monkeypatch.setattr(collection, '_get_tar_file_hash', MagicMock(side_effect=['manifest_checksum']))
monkeypatch.setattr(collection, '_consume_file', MagicMock(side_effect=['manifest_checksum', 'files_manifest_checksum_modified']))
monkeypatch.setattr(collection, '_get_json_from_tar_file', MagicMock(side_effect=[manifest_info, {'files': []}]))
monkeypatch.setattr(collection.os.path, 'isfile', MagicMock(return_value=True))
with patch.object(collection.display, 'display') as mock_display:
with patch.object(collection.display, 'vvv') as mock_debug:
local_collection.verify(remote_collection, './', './')
namespace = local_collection.namespace
name = local_collection.name
assert mock_display.call_count == 3
assert mock_display.call_args_list[0][0][0] == 'Collection %s.%s contains modified content in the following files:' % (namespace, name)
assert mock_display.call_args_list[1][0][0] == '%s.%s' % (namespace, name)
assert mock_display.call_args_list[2][0][0] == ' FILES.json'
# The -vvv output should show details (the checksums do not match)
assert mock_debug.call_count == 5
assert mock_debug.call_args_list[-1][0][0] == ' Expected: files_manifest_checksum\n Found: files_manifest_checksum_modified'
@patch.object(builtins, 'open', mock_open())
def test_verify_modified_files(monkeypatch, mock_collection, manifest_info, files_manifest_info):
local_collection = mock_collection()
remote_collection = mock_collection(local=False)
monkeypatch.setattr(collection, '_get_tar_file_hash', MagicMock(side_effect=['manifest_checksum']))
fakehashes = ['manifest_checksum', 'files_manifest_checksum', 'individual_file_checksum_modified']
monkeypatch.setattr(collection, '_consume_file', MagicMock(side_effect=fakehashes))
monkeypatch.setattr(collection, '_get_json_from_tar_file', MagicMock(side_effect=[manifest_info, files_manifest_info]))
monkeypatch.setattr(collection.os.path, 'isfile', MagicMock(return_value=True))
with patch.object(collection.display, 'display') as mock_display:
with patch.object(collection.display, 'vvv') as mock_debug:
local_collection.verify(remote_collection, './', './')
namespace = local_collection.namespace
name = local_collection.name
assert mock_display.call_count == 3
assert mock_display.call_args_list[0][0][0] == 'Collection %s.%s contains modified content in the following files:' % (namespace, name)
assert mock_display.call_args_list[1][0][0] == '%s.%s' % (namespace, name)
assert mock_display.call_args_list[2][0][0] == ' README.md'
# The -vvv output should show details (the checksums do not match)
assert mock_debug.call_count == 5
assert mock_debug.call_args_list[-1][0][0] == ' Expected: individual_file_checksum\n Found: individual_file_checksum_modified'
@patch.object(builtins, 'open', mock_open())
def test_verify_identical(monkeypatch, mock_collection, manifest_info, files_manifest_info):
local_collection = mock_collection()
remote_collection = mock_collection(local=False)
monkeypatch.setattr(collection, '_get_tar_file_hash', MagicMock(side_effect=['manifest_checksum']))
monkeypatch.setattr(collection, '_consume_file', MagicMock(side_effect=['manifest_checksum', 'files_manifest_checksum', 'individual_file_checksum']))
monkeypatch.setattr(collection, '_get_json_from_tar_file', MagicMock(side_effect=[manifest_info, files_manifest_info]))
monkeypatch.setattr(collection.os.path, 'isfile', MagicMock(return_value=True))
with patch.object(collection.display, 'display') as mock_display:
with patch.object(collection.display, 'vvv') as mock_debug:
local_collection.verify(remote_collection, './', './')
# Successful verification is quiet
assert mock_display.call_count == 0
# The -vvv output should show the checksums not matching
namespace = local_collection.namespace
name = local_collection.name
version = local_collection.latest_version
success_msg = "Successfully verified that checksums for '%s.%s:%s' match the remote collection" % (namespace, name, version)
assert mock_debug.call_count == 4
assert mock_debug.call_args_list[-1][0][0] == success_msg
@patch.object(os.path, 'isdir', return_value=True)
def test_verify_collections_no_version(mock_isdir, mock_collection, monkeypatch):
namespace = 'ansible_namespace'
name = 'collection'
version = '*' # Occurs if MANIFEST.json does not exist
local_collection = mock_collection(namespace=namespace, name=name, version=version)
monkeypatch.setattr(collection.CollectionRequirement, 'from_path', MagicMock(return_value=local_collection))
collections = [('%s.%s' % (namespace, name), version, None)]
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, './', local_collection.api, False, False)
err_msg = 'Collection %s.%s does not appear to have a MANIFEST.json. ' % (namespace, name)
err_msg += 'A MANIFEST.json is expected if the collection has been built and installed via ansible-galaxy.'
assert err.value.message == err_msg
@patch.object(collection.CollectionRequirement, 'verify')
def test_verify_collections_not_installed(mock_verify, mock_collection, monkeypatch):
namespace = 'ansible_namespace'
name = 'collection'
version = '1.0.0'
local_collection = mock_collection(local_installed=False)
found_remote = MagicMock(return_value=mock_collection(local=False))
monkeypatch.setattr(collection.CollectionRequirement, 'from_name', found_remote)
collections = [('%s.%s' % (namespace, name), version, None)]
search_path = './'
validate_certs = False
ignore_errors = False
apis = [local_collection.api]
with patch.object(collection, '_download_file') as mock_download_file:
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, search_path, apis, validate_certs, ignore_errors)
assert err.value.message == "Collection %s.%s is not installed in any of the collection paths." % (namespace, name)
@patch.object(collection.CollectionRequirement, 'verify')
def test_verify_collections_not_installed_ignore_errors(mock_verify, mock_collection, monkeypatch):
namespace = 'ansible_namespace'
name = 'collection'
version = '1.0.0'
local_collection = mock_collection(local_installed=False)
found_remote = MagicMock(return_value=mock_collection(local=False))
monkeypatch.setattr(collection.CollectionRequirement, 'from_name', found_remote)
collections = [('%s.%s' % (namespace, name), version, None)]
search_path = './'
validate_certs = False
ignore_errors = True
apis = [local_collection.api]
with patch.object(collection, '_download_file') as mock_download_file:
with patch.object(Display, 'warning') as mock_warning:
collection.verify_collections(collections, search_path, apis, validate_certs, ignore_errors)
skip_message = "Failed to verify collection %s.%s but skipping due to --ignore-errors being set." % (namespace, name)
original_err = "Error: Collection %s.%s is not installed in any of the collection paths." % (namespace, name)
assert mock_warning.called
assert mock_warning.call_args[0][0] == skip_message + " " + original_err
@patch.object(os.path, 'isdir', return_value=True)
@patch.object(collection.CollectionRequirement, 'verify')
def test_verify_collections_no_remote(mock_verify, mock_isdir, mock_collection, monkeypatch):
namespace = 'ansible_namespace'
name = 'collection'
version = '1.0.0'
monkeypatch.setattr(os.path, 'isfile', MagicMock(side_effect=[False, True]))
monkeypatch.setattr(collection.CollectionRequirement, 'from_path', MagicMock(return_value=mock_collection()))
collections = [('%s.%s' % (namespace, name), version, None)]
search_path = './'
validate_certs = False
ignore_errors = False
apis = []
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, search_path, apis, validate_certs, ignore_errors)
assert err.value.message == "Failed to find remote collection %s.%s:%s on any of the galaxy servers" % (namespace, name, version)
@patch.object(os.path, 'isdir', return_value=True)
@patch.object(collection.CollectionRequirement, 'verify')
def test_verify_collections_no_remote_ignore_errors(mock_verify, mock_isdir, mock_collection, monkeypatch):
namespace = 'ansible_namespace'
name = 'collection'
version = '1.0.0'
monkeypatch.setattr(os.path, 'isfile', MagicMock(side_effect=[False, True]))
monkeypatch.setattr(collection.CollectionRequirement, 'from_path', MagicMock(return_value=mock_collection()))
collections = [('%s.%s' % (namespace, name), version, None)]
search_path = './'
validate_certs = False
ignore_errors = True
apis = []
with patch.object(Display, 'warning') as mock_warning:
collection.verify_collections(collections, search_path, apis, validate_certs, ignore_errors)
skip_message = "Failed to verify collection %s.%s but skipping due to --ignore-errors being set." % (namespace, name)
original_err = "Error: Failed to find remote collection %s.%s:%s on any of the galaxy servers" % (namespace, name, version)
assert mock_warning.called
assert mock_warning.call_args[0][0] == skip_message + " " + original_err
def test_verify_collections_tarfile(monkeypatch):
monkeypatch.setattr(os.path, 'isfile', MagicMock(return_value=True))
invalid_format = 'ansible_namespace-collection-0.1.0.tar.gz'
collections = [(invalid_format, '*', None)]
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, './', [], False, False)
msg = "'%s' is not a valid collection name. The format namespace.name is expected." % invalid_format
assert err.value.message == msg
def test_verify_collections_path(monkeypatch):
monkeypatch.setattr(os.path, 'isfile', MagicMock(return_value=False))
invalid_format = 'collections/collection_namespace/collection_name'
collections = [(invalid_format, '*', None)]
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, './', [], False, False)
msg = "'%s' is not a valid collection name. The format namespace.name is expected." % invalid_format
assert err.value.message == msg
def test_verify_collections_url(monkeypatch):
monkeypatch.setattr(os.path, 'isfile', MagicMock(return_value=False))
invalid_format = 'https://galaxy.ansible.com/download/ansible_namespace-collection-0.1.0.tar.gz'
collections = [(invalid_format, '*', None)]
with pytest.raises(AnsibleError) as err:
collection.verify_collections(collections, './', [], False, False)
msg = "'%s' is not a valid collection name. The format namespace.name is expected." % invalid_format
assert err.value.message == msg
@patch.object(os.path, 'isdir', return_value=True)
@patch.object(collection.CollectionRequirement, 'verify')
def test_verify_collections_name(mock_verify, mock_isdir, mock_collection, monkeypatch):
local_collection = mock_collection()
monkeypatch.setattr(collection.CollectionRequirement, 'from_path', MagicMock(return_value=local_collection))
monkeypatch.setattr(os.path, 'isfile', MagicMock(side_effect=[False, True, False]))
located_remote_from_name = MagicMock(return_value=mock_collection(local=False))
monkeypatch.setattr(collection.CollectionRequirement, 'from_name', located_remote_from_name)
with patch.object(collection, '_download_file') as mock_download_file:
collections = [('%s.%s' % (local_collection.namespace, local_collection.name), '%s' % local_collection.latest_version, None)]
search_path = './'
validate_certs = False
ignore_errors = False
apis = [local_collection.api]
collection.verify_collections(collections, search_path, apis, validate_certs, ignore_errors)
assert mock_download_file.call_count == 1
assert located_remote_from_name.call_count == 1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible can't parse unicode correctly when reading a template
##### COMPONENT NAME
template
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = [u'/home/home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### OS / ENVIRONMENT
CentOS Linux release 7.7.1908 (Core)
##### STEPS TO REPRODUCE
vars.yml
```yaml
some_var: "test"
```
templates/example.json.j2
```json
{
"something": "test",
"this-breaks-key": "This breaks the whole file, because of: ü§",
"another-key": "{{ some_var }}"
}
```
```yaml
- hosts: test
connection: local
vars:
reference: "{{ lookup('template', 'templates/example.json.j2') }}"
delta: {}
tasks:
- name: Test | Get configuration changes
set_fact:
delta: "{{ delta | combine({item.key: item.value}, recursive=true) }}"
loop: "{{ reference | dict2items }}"
```
##### EXPECTED RESULTS
The valid JSON file should be parsed correctly; unicode should not be a problem in 2020.
Everything works just as expected when the unicode characters are removed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```json
{"msg": "dict2items requires a dictionary, got <type 'unicode'> instead."}
```
The underlying exception that causes this behavior:
```
Traceback (most recent call last):
File "/Users/matt/projects/ansibledev/ansible/lib/ansible/template/safe_eval.py", line 142, in safe_eval
compiled = compile(parsed_tree, expr, 'eval')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 87-88: ordinal not in range(128)
```
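
The traceback points at the actual culprit: on Python 2, `compile()` encodes its *filename* argument (here the whole expression string) to a byte string with the ASCII codec, so any non-ASCII character in the template output makes `safe_eval` traceback internally and silently fall back to returning the plain string, which `dict2items` then rejects. A minimal sketch of the kind of change applied in the linked PR (ansible/ansible#68576; the exact merged diff may differ) is to hand `compile()` a native string:

```python
# Sketch only -- demonstrates the failure mode and one plausible fix;
# not necessarily the exact change merged in ansible/ansible#68576.
import ast
from ansible.module_utils._text import to_native

expr = u'{"this-breaks-key": "because of: \xfc\xa7"}'
parsed_tree = ast.parse(expr, mode='eval')

# Python 2: raises UnicodeEncodeError, because compile() encodes the
# unicode filename argument (expr) with the ASCII codec.
# compiled = compile(parsed_tree, expr, 'eval')

# Passing a native str as the filename avoids the traceback on both
# Python 2 and Python 3.
compiled = compile(parsed_tree, to_native(expr, errors='surrogate_or_strict'), 'eval')
result = eval(compiled, {'__builtins__': {}}, {})
assert isinstance(result, dict)
```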
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
changelogs/fragments/66943-handle-unicode-in-safe_eval.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
lib/ansible/module_utils/common/text/converters.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# (c) 2016 Toshio Kuratomi <[email protected]>
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import codecs
import datetime
import json
from ansible.module_utils.common._collections_compat import Set
from ansible.module_utils.six import (
PY3,
binary_type,
iteritems,
text_type,
)
try:
codecs.lookup_error('surrogateescape')
HAS_SURROGATEESCAPE = True
except LookupError:
HAS_SURROGATEESCAPE = False
_COMPOSED_ERROR_HANDLERS = frozenset((None, 'surrogate_or_replace',
'surrogate_or_strict',
'surrogate_then_replace'))
def to_bytes(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
"""Make sure that a string is a byte string
:arg obj: An object to make sure is a byte string. In most cases this
will be either a text string or a byte string. However, with
``nonstring='simplerepr'``, this can be used as a traceback-free
version of ``str(obj)``.
:kwarg encoding: The encoding to use to transform from a text string to
a byte string. Defaults to using 'utf-8'.
:kwarg errors: The error handler to use if the text string is not
encodable using the specified encoding. Any valid `codecs error
handler <https://docs.python.org/2/library/codecs.html#codec-base-classes>`_
may be specified. There are three additional error strategies
specifically aimed at helping people to port code. The first two are:
:surrogate_or_strict: Will use ``surrogateescape`` if it is a valid
handler, otherwise it will use ``strict``
:surrogate_or_replace: Will use ``surrogateescape`` if it is a valid
handler, otherwise it will use ``replace``.
Because ``surrogateescape`` was added in Python3 this usually means that
Python3 will use ``surrogateescape`` and Python2 will use the fallback
error handler. Note that the code checks for ``surrogateescape`` when the
module is imported. If you have a backport of ``surrogateescape`` for
Python2, be sure to register the error handler prior to importing this
module.
The last error handler is:
:surrogate_then_replace: Will use ``surrogateescape`` if it is a valid
handler. If encoding with ``surrogateescape`` would traceback,
surrogates are first replaced with a replacement characters
and then the string is encoded using ``replace`` (which replaces
the rest of the nonencodable bytes). If ``surrogateescape`` is
not present it will simply use ``replace``. (Added in Ansible 2.3)
This strategy is designed to never traceback when it attempts
to encode a string.
The default until Ansible-2.2 was ``surrogate_or_replace``
From Ansible-2.3 onwards, the default is ``surrogate_then_replace``.
:kwarg nonstring: The strategy to use if a nonstring is specified in
``obj``. Default is 'simplerepr'. Valid values are:
:simplerepr: The default. This takes the ``str`` of the object and
then returns the bytes version of that string.
:empty: Return an empty byte string
:passthru: Return the object passed in
:strict: Raise a :exc:`TypeError`
:returns: Typically this returns a byte string. If a nonstring object is
passed in this may be a different type depending on the strategy
specified by nonstring. This will never return a text string.
.. note:: If passed a byte string, this function does not check that the
string is valid in the specified encoding. If it's important that the
byte string is in the specified encoding do::
encoded_string = to_bytes(to_text(input_string, 'latin-1'), 'utf-8')
    .. versionchanged:: 2.3
Added the ``surrogate_then_replace`` error handler and made it the default error handler.
"""
if isinstance(obj, binary_type):
return obj
# We're given a text string
# If it has surrogates, we know because it will decode
original_errors = errors
if errors in _COMPOSED_ERROR_HANDLERS:
if HAS_SURROGATEESCAPE:
errors = 'surrogateescape'
elif errors == 'surrogate_or_strict':
errors = 'strict'
else:
errors = 'replace'
if isinstance(obj, text_type):
try:
# Try this first as it's the fastest
return obj.encode(encoding, errors)
except UnicodeEncodeError:
if original_errors in (None, 'surrogate_then_replace'):
                # We should only reach this if encoding was non-utf8, original_errors
                # was surrogate_then_replace and errors was surrogateescape
# Slow but works
return_string = obj.encode('utf-8', 'surrogateescape')
return_string = return_string.decode('utf-8', 'replace')
return return_string.encode(encoding, 'replace')
raise
# Note: We do these last even though we have to call to_bytes again on the
# value because we're optimizing the common case
if nonstring == 'simplerepr':
try:
value = str(obj)
except UnicodeError:
try:
value = repr(obj)
except UnicodeError:
# Giving up
return to_bytes('')
elif nonstring == 'passthru':
return obj
elif nonstring == 'empty':
# python2.4 doesn't have b''
return to_bytes('')
elif nonstring == 'strict':
raise TypeError('obj must be a string type')
else:
raise TypeError('Invalid value %s for to_bytes\' nonstring parameter' % nonstring)
return to_bytes(value, encoding, errors)
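

# Illustrative examples (added for clarity; not part of the original module).
# With the default 'surrogate_then_replace' strategy, to_bytes() is designed
# never to traceback, even when the target encoding cannot represent the text:
#   to_bytes(u'caf\xe9')                    -> b'caf\xc3\xa9'  (utf-8)
#   to_bytes(u'caf\xe9', encoding='ascii')  -> b'caf?'         (replaced)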
def to_text(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
"""Make sure that a string is a text string
:arg obj: An object to make sure is a text string. In most cases this
will be either a text string or a byte string. However, with
``nonstring='simplerepr'``, this can be used as a traceback-free
version of ``str(obj)``.
:kwarg encoding: The encoding to use to transform from a byte string to
a text string. Defaults to using 'utf-8'.
:kwarg errors: The error handler to use if the byte string is not
decodable using the specified encoding. Any valid `codecs error
handler <https://docs.python.org/2/library/codecs.html#codec-base-classes>`_
may be specified. We support three additional error strategies
specifically aimed at helping people to port code:
:surrogate_or_strict: Will use surrogateescape if it is a valid
handler, otherwise it will use strict
:surrogate_or_replace: Will use surrogateescape if it is a valid
handler, otherwise it will use replace.
        :surrogate_then_replace: Does the same as surrogate_or_replace but
            was added for symmetry with the error handlers in
            :func:`ansible.module_utils._text.to_bytes` (Added in Ansible 2.3)
Because surrogateescape was added in Python3 this usually means that
Python3 will use `surrogateescape` and Python2 will use the fallback
error handler. Note that the code checks for surrogateescape when the
module is imported. If you have a backport of `surrogateescape` for
python2, be sure to register the error handler prior to importing this
module.
The default until Ansible-2.2 was `surrogate_or_replace`
In Ansible-2.3 this defaults to `surrogate_then_replace` for symmetry
with :func:`ansible.module_utils._text.to_bytes` .
:kwarg nonstring: The strategy to use if a nonstring is specified in
``obj``. Default is 'simplerepr'. Valid values are:
:simplerepr: The default. This takes the ``str`` of the object and
then returns the text version of that string.
:empty: Return an empty text string
:passthru: Return the object passed in
:strict: Raise a :exc:`TypeError`
:returns: Typically this returns a text string. If a nonstring object is
passed in this may be a different type depending on the strategy
specified by nonstring. This will never return a byte string.
From Ansible-2.3 onwards, the default is `surrogate_then_replace`.
    .. versionchanged:: 2.3
Added the surrogate_then_replace error handler and made it the default error handler.
"""
if isinstance(obj, text_type):
return obj
if errors in _COMPOSED_ERROR_HANDLERS:
if HAS_SURROGATEESCAPE:
errors = 'surrogateescape'
elif errors == 'surrogate_or_strict':
errors = 'strict'
else:
errors = 'replace'
if isinstance(obj, binary_type):
# Note: We don't need special handling for surrogate_then_replace
# because all bytes will either be made into surrogates or are valid
# to decode.
return obj.decode(encoding, errors)
# Note: We do these last even though we have to call to_text again on the
# value because we're optimizing the common case
if nonstring == 'simplerepr':
try:
value = str(obj)
except UnicodeError:
try:
value = repr(obj)
except UnicodeError:
# Giving up
return u''
elif nonstring == 'passthru':
return obj
elif nonstring == 'empty':
return u''
elif nonstring == 'strict':
raise TypeError('obj must be a string type')
else:
raise TypeError('Invalid value %s for to_text\'s nonstring parameter' % nonstring)
return to_text(value, encoding, errors)
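

# Illustrative examples (added for clarity; not part of the original module):
#   to_text(b'caf\xc3\xa9')  -> u'caf\xe9'
#   to_text(b'caf\xe9')      -> u'caf\udce9' on Python 3 (surrogateescape);
#                               u'caf\ufffd' on Python 2 (falls back to replace)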
#: :py:func:`to_native`
#: Transform a variable into the native str type for the python version
#:
#: On Python2, this is an alias for
#: :func:`~ansible.module_utils.to_bytes`. On Python3 it is an alias for
#: :func:`~ansible.module_utils.to_text`. It makes it easier to
#: transform a variable into the native str type for the python version
#: the code is running on. Use this when constructing the message to
#: send to exceptions or when dealing with an API that needs to take
#: a native string. Example::
#:
#: try:
#: 1//0
#: except ZeroDivisionError as e:
#: raise MyException('Encountered and error: %s' % to_native(e))
if PY3:
to_native = to_text
else:
to_native = to_bytes
def _json_encode_fallback(obj):
if isinstance(obj, Set):
return list(obj)
elif isinstance(obj, datetime.datetime):
return obj.isoformat()
raise TypeError("Cannot json serialize %s" % to_native(obj))
def jsonify(data, **kwargs):
for encoding in ("utf-8", "latin-1"):
try:
return json.dumps(data, encoding=encoding, default=_json_encode_fallback, **kwargs)
# Old systems using old simplejson module does not support encoding keyword.
except TypeError:
try:
new_data = container_to_text(data, encoding=encoding)
except UnicodeDecodeError:
continue
return json.dumps(new_data, default=_json_encode_fallback, **kwargs)
except UnicodeDecodeError:
continue
raise UnicodeError('Invalid unicode encoding encountered')
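

# Illustrative example (added for clarity; not part of the original module):
# jsonify() first tries utf-8 and then latin-1, so byte strings in either
# encoding serialize instead of raising:
#   jsonify({'k': u'caf\xe9'})  -> '{"k": "caf\\u00e9"}'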
def container_to_bytes(d, encoding='utf-8', errors='surrogate_or_strict'):
''' Recursively convert dict keys and values to byte str
    Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, text_type):
return to_bytes(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(container_to_bytes(o, encoding, errors) for o in iteritems(d))
elif isinstance(d, list):
return [container_to_bytes(o, encoding, errors) for o in d]
elif isinstance(d, tuple):
return tuple(container_to_bytes(o, encoding, errors) for o in d)
else:
return d
def container_to_text(d, encoding='utf-8', errors='surrogate_or_strict'):
"""Recursively convert dict keys and values to byte str
Specialized for json return because this only handles, lists, tuples,
and dict container types (the containers that the json module returns)
"""
if isinstance(d, binary_type):
# Warning, can traceback
return to_text(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(container_to_text(o, encoding, errors) for o in iteritems(d))
elif isinstance(d, list):
return [container_to_text(o, encoding, errors) for o in d]
elif isinstance(d, tuple):
return tuple(container_to_text(o, encoding, errors) for o in d)
else:
return d
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
lib/ansible/template/safe_eval.py
|
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import sys
from ansible import constants as C
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves import builtins
from ansible.plugins.loader import filter_loader, test_loader
def safe_eval(expr, locals=None, include_exceptions=False):
'''
This is intended for allowing things like:
with_items: a_list_variable
Where Jinja2 would return a string but we do not want to allow it to
call functions (outside of Jinja2, where the env is constrained).
Based on:
http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe
'''
locals = {} if locals is None else locals
# define certain JSON types
# eg. JSON booleans are unknown to python eval()
OUR_GLOBALS = {
'__builtins__': {}, # avoid global builtins as per eval docs
'false': False,
'null': None,
'true': True,
# also add back some builtins we do need
'True': True,
'False': False,
'None': None
}
# this is the whitelist of AST nodes we are going to
# allow in the evaluation. Any node type other than
# those listed here will raise an exception in our custom
# visitor class defined below.
SAFE_NODES = set(
(
ast.Add,
ast.BinOp,
# ast.Call,
ast.Compare,
ast.Dict,
ast.Div,
ast.Expression,
ast.List,
ast.Load,
ast.Mult,
ast.Num,
ast.Name,
ast.Str,
ast.Sub,
ast.USub,
ast.Tuple,
ast.UnaryOp,
)
)
# AST node types were expanded after 2.6
if sys.version_info[:2] >= (2, 7):
SAFE_NODES.update(
set(
(ast.Set,)
)
)
# And in Python 3.4 too
if sys.version_info[:2] >= (3, 4):
SAFE_NODES.update(
set(
(ast.NameConstant,)
)
)
# And in Python 3.6 too, although not encountered until Python 3.8, see https://bugs.python.org/issue32892
if sys.version_info[:2] >= (3, 6):
SAFE_NODES.update(
set(
(ast.Constant,)
)
)
filter_list = []
for filter_ in filter_loader.all():
filter_list.extend(filter_.filters().keys())
test_list = []
for test in test_loader.all():
test_list.extend(test.tests().keys())
CALL_WHITELIST = C.DEFAULT_CALLABLE_WHITELIST + filter_list + test_list
class CleansingNodeVisitor(ast.NodeVisitor):
def generic_visit(self, node, inside_call=False):
if type(node) not in SAFE_NODES:
raise Exception("invalid expression (%s)" % expr)
elif isinstance(node, ast.Call):
inside_call = True
elif isinstance(node, ast.Name) and inside_call:
# Disallow calls to builtin functions that we have not vetted
# as safe. Other functions are excluded by setting locals in
# the call to eval() later on
if hasattr(builtins, node.id) and node.id not in CALL_WHITELIST:
raise Exception("invalid function: %s" % node.id)
# iterate over all child nodes
for child_node in ast.iter_child_nodes(node):
self.generic_visit(child_node, inside_call)
if not isinstance(expr, string_types):
# already templated to a datastructure, perhaps?
if include_exceptions:
return (expr, None)
return expr
cnv = CleansingNodeVisitor()
try:
parsed_tree = ast.parse(expr, mode='eval')
cnv.visit(parsed_tree)
compiled = compile(parsed_tree, expr, 'eval')
# Note: passing our own globals and locals here constrains what
# callables (and other identifiers) are recognized. this is in
# addition to the filtering of builtins done in CleansingNodeVisitor
result = eval(compiled, OUR_GLOBALS, dict(locals))
if include_exceptions:
return (result, None)
else:
return result
except SyntaxError as e:
# special handling for syntax errors, we just return
# the expression string back as-is to support late evaluation
if include_exceptions:
return (expr, None)
return expr
except Exception as e:
if include_exceptions:
return (expr, e)
return expr
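A plausible minimal fix, consistent with the traceback above (a sketch under that assumption, not necessarily the exact change merged in the linked PR): convert the expression to a native `str` before using it as the `compile()` filename, e.g. with `to_native()`:
```python
# -*- coding: utf-8 -*-
# Sketch of a possible fix (assumed): pass a native str as the compile()
# filename so Python 2 does not try to ASCII-encode a unicode expression.
import ast
from ansible.module_utils._text import to_native

expr = u'{"something": "test", "this-breaks-key": "ü§"}'
parsed_tree = ast.parse(expr, mode='eval')
compiled = compile(parsed_tree, to_native(expr), 'eval')
print(eval(compiled, {'__builtins__': {}}, {}))  # the dict survives the round-trip
```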
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible can't parse unicode correctly when reading a template
##### COMPONENT NAME
template
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = [u'/home/home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### OS / ENVIRONMENT
CentOS Linux release 7.7.1908 (Core)
##### STEPS TO REPRODUCE
vars.yml
```yaml
some_var: "test"
```
templates/example.json.j2
```json
{
"something": "test",
"this-breaks-key": "This breaks the whole file, because of: ü§",
"another-key": "{{ some_var }}"
}
```
```yaml
- hosts: test
connection: local
vars:
reference: "{{ lookup('template', 'templates/example.json.j2') }}"
delta: {}
tasks:
- name: Test | Get configuration changes
set_fact:
delta: "{{ delta | combine({item.key: item.value}, recursive=true) }}"
loop: "{{ reference | dict2items }}"
```
##### EXPECTED RESULTS
The valid JSON file should be parsed correctly; unicode should not be a problem in 2020.
Everything works just as expected when `unicode chars` are removed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```json
{"msg": "dict2items requires a dictionary, got <type 'unicode'> instead."}
```
The underlying exception that causes this behavior:
```
Traceback (most recent call last):
File "/Users/matt/projects/ansibledev/ansible/lib/ansible/template/safe_eval.py", line 142, in safe_eval
compiled = compile(parsed_tree, expr, 'eval')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 87-88: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
test/integration/targets/templating_lookups/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_ROLES_PATH=../ UNICODE_VAR=café ansible-playbook runme.yml "$@"
ansible-playbook template_lookup_vaulted/playbook.yml --vault-password-file template_lookup_vaulted/test_vault_pass "$@"
ansible-playbook template_deepcopy/playbook.yml -i template_deepcopy/hosts "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible can't parse unicode correctly when reading a template
##### COMPONENT NAME
template
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = [u'/home/home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### OS / ENVIRONMENT
CentOS Linux release 7.7.1908 (Core)
##### STEPS TO REPRODUCE
vars.yml
```yaml
some_var: "test"
```
templates/example.json.j2
```json
{
"something": "test",
"this-breaks-key": "This breaks the whole file, because of: ü§",
"another-key": "{{ some_var }}"
}
```
```yaml
- hosts: test
connection: local
vars:
reference: "{{ lookup('template', 'templates/example.json.j2') }}"
delta: {}
tasks:
- name: Test | Get configuration changes
set_fact:
delta: "{{ delta | combine({item.key: item.value}, recursive=true) }}"
loop: "{{ reference | dict2items }}"
```
##### EXPECTED RESULTS
The valid JSON file should be parsed correctly; unicode should not be a problem in 2020.
Everything works just as expected when `unicode chars` are removed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```json
{"msg": "dict2items requires a dictionary, got <type 'unicode'> instead."}
```
The underlying exception that causes this behavior:
```
Traceback (most recent call last):
File "/Users/matt/projects/ansibledev/ansible/lib/ansible/template/safe_eval.py", line 142, in safe_eval
compiled = compile(parsed_tree, expr, 'eval')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 87-88: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
test/integration/targets/templating_lookups/template_lookup_safe_eval_unicode/playbook.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,943 |
Unicode breaks dict
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible can't parse unicode correctly when reading a template
##### COMPONENT NAME
template
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = [u'/home/home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### OS / ENVIRONMENT
CentOS Linux release 7.7.1908 (Core)
##### STEPS TO REPRODUCE
vars.yml
```yaml
some_var: "test"
```
templates/example.json.j2
```json
{
"something": "test",
"this-breaks-key": "This breaks the whole file, because of: ü§",
"another-key": "{{ some_var }}"
}
```
```yaml
- hosts: test
connection: local
vars:
reference: "{{ lookup('template', 'templates/example.json.j2') }}"
delta: {}
tasks:
- name: Test | Get configuration changes
set_fact:
delta: "{{ delta | combine({item.key: item.value}, recursive=true) }}"
loop: "{{ reference | dict2items }}"
```
##### EXPECTED RESULTS
The valid JSON file should be parsed correctly; unicode should not be a problem in 2020.
Everything works just as expected when `unicode chars` are removed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```json
{"msg": "dict2items requires a dictionary, got <type 'unicode'> instead."}
```
The underlying exception that causes this behavior:
```
Traceback (most recent call last):
File "/Users/matt/projects/ansibledev/ansible/lib/ansible/template/safe_eval.py", line 142, in safe_eval
compiled = compile(parsed_tree, expr, 'eval')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 87-88: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/66943
|
https://github.com/ansible/ansible/pull/68576
|
889da811d7fdc4c0fdab6ff573f7bc66b60b753c
|
ecd986006ededd3ecfd4fb6704d7a68b3bfba5e1
| 2020-01-30T13:04:45Z |
python
| 2020-05-20T16:08:50Z |
test/integration/targets/templating_lookups/template_lookup_safe_eval_unicode/template.json.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
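To make the difference concrete, a minimal sketch (the fact payloads are made up): with the default hash behaviour, `combine_vars()` replaces the whole `ansible_facts` value with the second dict's, while `merge_hash()` descends into it and keeps both fact sets:
```python
# Minimal sketch with made-up fact payloads: replace-style combine_vars()
# versus the recursive merge_hash().
from ansible.utils.vars import combine_vars, merge_hash

setup_res = {'ansible_facts': {'ansible_distribution': 'Fedora'}}
pkg_res = {'ansible_facts': {'packages': {'bash': [{'version': '5.0'}]}}}

print(combine_vars(setup_res, pkg_res))
# replace semantics: {'ansible_facts': {'packages': {...}}} -- distribution facts are gone

print(merge_hash(setup_res, pkg_res))
# merge semantics: {'ansible_facts': {'ansible_distribution': 'Fedora', 'packages': {...}}}
```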
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
changelogs/fragments/gf_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
lib/ansible/plugins/action/gather_facts.py
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import time
from ansible import constants as C
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.plugins.action import ActionBase
from ansible.utils.vars import combine_vars
class ActionModule(ActionBase):
def _get_module_args(self, fact_module, task_vars):
mod_args = self._task.args.copy()
# deal with 'setup specific arguments'
if fact_module != 'setup':
# network facts modules must support gather_subset
if self._connection._load_name not in ('network_cli', 'httpapi', 'netconf'):
subset = mod_args.pop('gather_subset', None)
if subset not in ('all', ['all']):
self._display.warning('Ignoring subset(%s) for %s' % (subset, fact_module))
timeout = mod_args.pop('gather_timeout', None)
if timeout is not None:
self._display.warning('Ignoring timeout(%s) for %s' % (timeout, fact_module))
fact_filter = mod_args.pop('filter', None)
if fact_filter is not None:
self._display.warning('Ignoring filter(%s) for %s' % (fact_filter, fact_module))
# Strip out keys with ``None`` values, effectively mimicking ``omit`` behavior
# This ensures we don't pass a ``None`` value as an argument expecting a specific type
mod_args = dict((k, v) for k, v in mod_args.items() if v is not None)
# handle module defaults
mod_args = get_action_args_with_defaults(fact_module, mod_args, self._task.module_defaults, self._templar)
return mod_args
def _combine_task_result(self, result, task_result):
filtered_res = {
'ansible_facts': task_result.get('ansible_facts', {}),
'warnings': task_result.get('warnings', []),
'deprecations': task_result.get('deprecations', []),
}
return combine_vars(result, filtered_res)
def run(self, tmp=None, task_vars=None):
self._supports_check_mode = True
result = super(ActionModule, self).run(tmp, task_vars)
result['ansible_facts'] = {}
modules = C.config.get_config_value('FACTS_MODULES', variables=task_vars)
parallel = task_vars.pop('ansible_facts_parallel', self._task.args.pop('parallel', None))
if 'smart' in modules:
connection_map = C.config.get_config_value('CONNECTION_FACTS_MODULES', variables=task_vars)
network_os = self._task.args.get('network_os', task_vars.get('ansible_network_os', task_vars.get('ansible_facts', {}).get('network_os')))
modules.extend([connection_map.get(network_os or self._connection._load_name, 'setup')])
modules.pop(modules.index('smart'))
failed = {}
skipped = {}
if parallel is False or (len(modules) == 1 and parallel is None):
# serially execute each module
for fact_module in modules:
# just one module, no need for fancy async
mod_args = self._get_module_args(fact_module, task_vars)
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
if res.get('failed', False):
failed[fact_module] = res
elif res.get('skipped', False):
skipped[fact_module] = res
else:
result = self._combine_task_result(result, res)
self._remove_tmp_path(self._connection._shell.tmpdir)
else:
# do it async
jobs = {}
for fact_module in modules:
mod_args = self._get_module_args(fact_module, task_vars)
self._display.vvvv("Running %s" % fact_module)
jobs[fact_module] = (self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=True))
while jobs:
for module in jobs:
poll_args = {'jid': jobs[module]['ansible_job_id'], '_async_dir': os.path.dirname(jobs[module]['results_file'])}
res = self._execute_module(module_name='async_status', module_args=poll_args, task_vars=task_vars, wrap_async=False)
if res.get('finished', 0) == 1:
if res.get('failed', False):
failed[module] = res
elif res.get('skipped', False):
skipped[module] = res
else:
result = self._combine_task_result(result, res)
del jobs[module]
break
else:
time.sleep(0.1)
else:
time.sleep(0.5)
if skipped:
result['msg'] = "The following modules were skipped: %s\n" % (', '.join(skipped.keys()))
result['skipped_modules'] = skipped
if len(skipped) == len(modules):
result['skipped'] = True
if failed:
result['failed'] = True
result['msg'] = "The following modules failed to execute: %s\n" % (', '.join(failed.keys()))
result['failed_modules'] = failed
# tell executor facts were gathered
result['ansible_facts']['_ansible_facts_gathered'] = True
# hack to keep --verbose from showing all the setup module result
result['_ansible_verbose_override'] = True
return result
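A minimal sketch of the change the issue proposes (assumed; the merged PR may differ in details): swap `combine_vars()` for `merge_hash()` in `_combine_task_result()` so that facts from every module accumulate instead of the last module's facts winning:
```python
# Sketch (assumed) of the proposed change in gather_facts.py: merge each
# module's filtered result recursively instead of replacing prior facts.
from ansible.utils.vars import merge_hash

def _combine_task_result(self, result, task_result):
    filtered_res = {
        'ansible_facts': task_result.get('ansible_facts', {}),
        'warnings': task_result.get('warnings', []),
        'deprecations': task_result.get('deprecations', []),
    }
    # merge_hash recurses into nested dicts, so 'ansible_facts' entries
    # gathered by several fact modules are all preserved in the result
    return merge_hash(result, filtered_res)
```
With this change the order of FACTS_MODULES still should not matter for disjoint fact sets; only keys produced by more than one module would depend on merge order.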
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/aliases
|
shippable/posix/group3
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/library/facts_one
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/library/facts_two
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/one_two.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/runme.sh
|
#!/usr/bin/env bash
set -eux
# ANSIBLE_CACHE_PLUGINS=cache_plugins/ ANSIBLE_CACHE_PLUGIN=none ansible-playbook test_gathering_facts.yml -i inventory -v "$@"
ansible-playbook test_gathering_facts.yml -i inventory -v "$@"
# ANSIBLE_CACHE_PLUGIN=base ansible-playbook test_gathering_facts.yml -i inventory -v "$@"
ANSIBLE_GATHERING=smart ansible-playbook test_run_once.yml -i inventory -v "$@"
# ensure clean_facts is working properly
ansible-playbook test_prevent_injection.yml -i inventory -v "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/two_one.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, in the context of this gather_facts module the replacement strategy doesn't make any sense.
Thus I suggest to replace the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399 which introduced the possibility to configure multiple fact gathering modules and execute them in parallel.
Sure, one can kind of work around this by also setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overwriting variables during playbook execution, etc. So that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful when using `gather_facts`, since _the_ goal of configuring multiple facts modules is to have their results combined.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created JSON facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES doesn't make a difference because the modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/integration/targets/gathering_facts/verify_merge_facts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,532 |
gather_facts does not combine results of multiple fact modules
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When executing the gather_facts module with a list of fact gathering modules, the results aren't combined. Instead, the result from one fact gathering module replaces all others.
This is caused by gather_facts calling `combine_vars()`
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/plugins/action/gather_facts.py#L49-L55
which defaults to doing a replacement instead of a merge:
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/utils/vars.py#L80-L92
AFAICS, the replacement strategy doesn't make any sense in the context of this gather_facts module.
Thus I suggest replacing the `combine_vars()` calls in `gather_facts.py` with `merge_hash()`.
See also @bcoca's gather_facts pull request #49399, which introduced the ability to configure multiple fact gathering modules and execute them in parallel.
One can partially work around this by setting `ANSIBLE_HASH_BEHAVIOUR=merge`, but then you get the merge behavior everywhere, i.e. also when overriding variables during playbook execution, so that isn't a real workaround at all.
Again, I can't see a use case where the default replace behavior would be useful with `gather_facts`, since _the_ goal of configuring multiple fact modules is to have their results combined.
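A hypothetical sketch of that suggestion (illustrative helper name and result filtering; the actual change landed later via PR #68987 and may differ):
```python
# Hypothetical sketch of the suggested fix for
# lib/ansible/plugins/action/gather_facts.py -- not the merged patch.
from ansible.utils.vars import merge_hash

def _combine_module_results(res, task_result):
    # Recursively merge each fact module's filtered output instead of
    # letting the module that finishes last replace everything else.
    filtered = {
        'ansible_facts': task_result.get('ansible_facts', {}),
        'warnings': task_result.get('warnings', []),
        'deprecations': task_result.get('deprecations', []),
    }
    return merge_hash(res, filtered)
```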
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
gather_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /home/gms/program/mailserver/ansible.cfg
configured module search path = ['/home/gms/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Fedora 31
- ansible-2.9.6-1.fc31.noarch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ANSIBLE_FACTS_MODULES=setup,package_facts ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
ANSIBLE_FACTS_MODULES=package_facts,setup ansible all -i myhost, -m gather_facts --tree facts >/dev/null 2>&1
grep '"ansible_distribution":\|"packages":' -o facts -r
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that the created json facts file contains the facts from both the `setup` and `package_facts` fact gathering modules, i.e. the above two greps should yield output similar to:
```
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
facts/myhost:"ansible_distribution":
facts/myhost:"packages":
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Note how I only get the results from the `package_facts` fact gathering module - even when I reverse the list of fact gathering modules. The results from the `setup` gathering module are missing.
<!--- Paste verbatim command output between quotes -->
```paste below
facts/myhost:"packages":
facts/myhost:"packages":
```
AFAICS, the order of the FACTS_MODULES makes no difference because the fact modules are executed in parallel by default.
|
https://github.com/ansible/ansible/issues/68532
|
https://github.com/ansible/ansible/pull/68987
|
fe941a4045861bfe87340381e7992bcecdbc0291
|
9281148b623d4e2e8302778d91af3e84ab9579a9
| 2020-03-28T19:58:57Z |
python
| 2020-05-20T22:53:37Z |
test/sanity/ignore.txt
|
docs/bin/find-plugin-refs.py future-import-boilerplate
docs/bin/find-plugin-refs.py metaclass-boilerplate
docs/docsite/_extensions/pygments_lexer.py future-import-boilerplate
docs/docsite/_extensions/pygments_lexer.py metaclass-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py future-import-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py metaclass-boilerplate
docs/docsite/rst/conf.py future-import-boilerplate
docs/docsite/rst/conf.py metaclass-boilerplate
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
examples/scripts/uptime.py future-import-boilerplate
examples/scripts/uptime.py metaclass-boilerplate
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.6!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.7!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-3.5!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-3.5!skip # release process and docs build only, 3.6+ required
hacking/fix_test_syntax.py future-import-boilerplate
hacking/fix_test_syntax.py metaclass-boilerplate
hacking/get_library.py future-import-boilerplate
hacking/get_library.py metaclass-boilerplate
hacking/report.py future-import-boilerplate
hacking/report.py metaclass-boilerplate
hacking/return_skeleton_generator.py future-import-boilerplate
hacking/return_skeleton_generator.py metaclass-boilerplate
hacking/test-module.py future-import-boilerplate
hacking/test-module.py metaclass-boilerplate
hacking/tests/gen_distribution_version_testcase.py future-import-boilerplate
hacking/tests/gen_distribution_version_testcase.py metaclass-boilerplate
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/config/module_defaults.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/module_utils/_text.py future-import-boilerplate
lib/ansible/module_utils/_text.py metaclass-boilerplate
lib/ansible/module_utils/api.py future-import-boilerplate
lib/ansible/module_utils/api.py metaclass-boilerplate
lib/ansible/module_utils/basic.py metaclass-boilerplate
lib/ansible/module_utils/common/network.py future-import-boilerplate
lib/ansible/module_utils/common/network.py metaclass-boilerplate
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/connection.py future-import-boilerplate
lib/ansible/module_utils/connection.py metaclass-boilerplate
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/facts/sysctl.py future-import-boilerplate
lib/ansible/module_utils/facts/sysctl.py metaclass-boilerplate
lib/ansible/module_utils/facts/system/distribution.py pylint:ansible-bad-function
lib/ansible/module_utils/facts/utils.py future-import-boilerplate
lib/ansible/module_utils/facts/utils.py metaclass-boilerplate
lib/ansible/module_utils/json_utils.py future-import-boilerplate
lib/ansible/module_utils/json_utils.py metaclass-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py future-import-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py metaclass-boilerplate
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py future-import-boilerplate
lib/ansible/module_utils/pycompat24.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/service.py future-import-boilerplate
lib/ansible/module_utils/service.py metaclass-boilerplate
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/splitter.py future-import-boilerplate
lib/ansible/module_utils/splitter.py metaclass-boilerplate
lib/ansible/module_utils/urls.py future-import-boilerplate
lib/ansible/module_utils/urls.py metaclass-boilerplate
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/module_utils/yumdnf.py future-import-boilerplate
lib/ansible/module_utils/yumdnf.py metaclass-boilerplate
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/expect.py validate-modules:doc-missing-type
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/file.py pylint:ansible-bad-function
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/find.py validate-modules:parameter-list-no-elements
lib/ansible/modules/find.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/unarchive.py validate-modules:parameter-list-no-elements
lib/ansible/modules/get_url.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/uri.py validate-modules:parameter-list-no-elements
lib/ansible/modules/uri.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:doc-elements-mismatch
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/apt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt.py validate-modules:undocumented-parameter
lib/ansible/modules/apt_key.py validate-modules:mutually_exclusive-unknown
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_key.py validate-modules:undocumented-parameter
lib/ansible/modules/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-missing-type
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/dnf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/dnf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/dpkg_selections.py validate-modules:doc-missing-type
lib/ansible/modules/dpkg_selections.py validate-modules:doc-required-mismatch
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/package_facts.py validate-modules:doc-missing-type
lib/ansible/modules/package_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/package_facts.py validate-modules:return-syntax-error
lib/ansible/modules/rpm_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum.py validate-modules:doc-missing-type
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum.py validate-modules:parameter-list-no-elements
lib/ansible/modules/yum.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum.py validate-modules:undocumented-parameter
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:doc-missing-type
lib/ansible/modules/yum_repository.py validate-modules:parameter-list-no-elements
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/git.py validate-modules:parameter-list-no-elements
lib/ansible/modules/git.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/subversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/subversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/subversion.py validate-modules:undocumented-parameter
lib/ansible/modules/getent.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/hostname.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/iptables.py validate-modules:parameter-list-no-elements
lib/ansible/modules/known_hosts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/known_hosts.py validate-modules:doc-missing-type
lib/ansible/modules/known_hosts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/setup.py validate-modules:doc-missing-type
lib/ansible/modules/setup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/sysvinit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/wait_for.py validate-modules:parameter-list-no-elements
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/doc_fragments/backup.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/backup.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/files.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/files.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/url.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/url.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/validate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/validate.py metaclass-boilerplate
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
setup.py future-import-boilerplate
setup.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py metaclass-boilerplate
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/async/library/async_test.py future-import-boilerplate
test/integration/targets/async/library/async_test.py metaclass-boilerplate
test/integration/targets/async_fail/library/async_test.py future-import-boilerplate
test/integration/targets/async_fail/library/async_test.py metaclass-boilerplate
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/expect/files/test_command.py future-import-boilerplate
test/integration/targets/expect/files/test_command.py metaclass-boilerplate
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/get_url/files/testserver.py future-import-boilerplate
test/integration/targets/get_url/files/testserver.py metaclass-boilerplate
test/integration/targets/group/files/gidget.py future-import-boilerplate
test/integration/targets/group/files/gidget.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py metaclass-boilerplate
test/integration/targets/incidental_script_inventory_vmware_inventory/vmware_inventory.py future-import-boilerplate
test/integration/targets/incidental_script_inventory_vmware_inventory/vmware_inventory.py metaclass-boilerplate
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.py future-import-boilerplate
test/integration/targets/module_precedence/lib_with_extension/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_utils/library/test.py future-import-boilerplate
test/integration/targets/module_utils/library/test.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_env_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_env_override.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_failure.py future-import-boilerplate
test/integration/targets/module_utils/library/test_failure.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_override.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/pause/test-pause.py future-import-boilerplate
test/integration/targets/pause/test-pause.py metaclass-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py future-import-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py metaclass-boilerplate
test/integration/targets/pip/files/setup.py future-import-boilerplate
test/integration/targets/pip/files/setup.py metaclass-boilerplate
test/integration/targets/run_modules/library/test.py future-import-boilerplate
test/integration/targets/run_modules/library/test.py metaclass-boilerplate
test/integration/targets/script/files/no_shebang.py future-import-boilerplate
test/integration/targets/script/files/no_shebang.py metaclass-boilerplate
test/integration/targets/service/files/ansible_test_service.py future-import-boilerplate
test/integration/targets/service/files/ansible_test_service.py metaclass-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py future-import-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py metaclass-boilerplate
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/role_filter/filter_plugins/myplugin.py future-import-boilerplate
test/integration/targets/template/role_filter/filter_plugins/myplugin.py metaclass-boilerplate
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/infra/library/test.py future-import-boilerplate
test/integration/targets/infra/library/test.py metaclass-boilerplate
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/uri/files/testserver.py future-import-boilerplate
test/integration/targets/uri/files/testserver.py metaclass-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py future-import-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py metaclass-boilerplate
test/integration/targets/builtin_vars_prompt/test-vars_prompt.py future-import-boilerplate
test/integration/targets/builtin_vars_prompt/test-vars_prompt.py metaclass-boilerplate
test/integration/targets/vault/test-vault-client.py future-import-boilerplate
test/integration/targets/vault/test-vault-client.py metaclass-boilerplate
test/integration/targets/wait_for/files/testserver.py future-import-boilerplate
test/integration/targets/wait_for/files/testserver.py metaclass-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py future-import-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py metaclass-boilerplate
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/ansible_tower.py future-import-boilerplate
test/support/integration/plugins/module_utils/ansible_tower.py metaclass-boilerplate
test/support/integration/plugins/module_utils/azure_rm_common.py future-import-boilerplate
test/support/integration/plugins/module_utils/azure_rm_common.py metaclass-boilerplate
test/support/integration/plugins/module_utils/azure_rm_common_rest.py future-import-boilerplate
test/support/integration/plugins/module_utils/azure_rm_common_rest.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/k8s/common.py metaclass-boilerplate
test/support/integration/plugins/module_utils/k8s/raw.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/net_tools/nios/api.py future-import-boilerplate
test/support/integration/plugins/module_utils/net_tools/nios/api.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/synchronize.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/config/manager/test_find_ini_config_file.py future-import-boilerplate
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/inventory/test_group.py future-import-boilerplate
test/units/inventory/test_group.py metaclass-boilerplate
test/units/inventory/test_host.py future-import-boilerplate
test/units/inventory/test_host.py metaclass-boilerplate
test/units/mock/path.py future-import-boilerplate
test/units/mock/path.py metaclass-boilerplate
test/units/mock/yaml_helper.py future-import-boilerplate
test/units/mock/yaml_helper.py metaclass-boilerplate
test/units/module_utils/basic/test__symbolic_mode_to_octal.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py metaclass-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_exit_json.py future-import-boilerplate
test/units/module_utils/basic/test_get_file_attributes.py future-import-boilerplate
test/units/module_utils/basic/test_heuristic_log_sanitize.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/basic/test_safe_eval.py future-import-boilerplate
test/units/module_utils/basic/test_tmpdir.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py metaclass-boilerplate
test/units/module_utils/conftest.py future-import-boilerplate
test/units/module_utils/conftest.py metaclass-boilerplate
test/units/module_utils/facts/base.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py metaclass-boilerplate
test/units/module_utils/facts/network/test_generic_bsd.py future-import-boilerplate
test/units/module_utils/facts/other/test_facter.py future-import-boilerplate
test/units/module_utils/facts/other/test_ohai.py future-import-boilerplate
test/units/module_utils/facts/system/test_lsb.py future-import-boilerplate
test/units/module_utils/facts/test_ansible_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collectors.py future-import-boilerplate
test/units/module_utils/facts/test_facts.py future-import-boilerplate
test/units/module_utils/facts/test_timeout.py future-import-boilerplate
test/units/module_utils/facts/test_utils.py future-import-boilerplate
test/units/module_utils/json_utils/test_filter_non_json_lines.py future-import-boilerplate
test/units/module_utils/parsing/test_convert_bool.py future-import-boilerplate
test/units/module_utils/test_distro.py future-import-boilerplate
test/units/module_utils/test_distro.py metaclass-boilerplate
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/conftest.py future-import-boilerplate
test/units/modules/conftest.py metaclass-boilerplate
test/units/modules/test_copy.py future-import-boilerplate
test/units/modules/test_pip.py future-import-boilerplate
test/units/modules/test_pip.py metaclass-boilerplate
test/units/modules/test_apt.py future-import-boilerplate
test/units/modules/test_apt.py metaclass-boilerplate
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/modules/test_yum.py future-import-boilerplate
test/units/modules/test_yum.py metaclass-boilerplate
test/units/modules/test_iptables.py future-import-boilerplate
test/units/modules/test_iptables.py metaclass-boilerplate
test/units/modules/test_known_hosts.py future-import-boilerplate
test/units/modules/test_known_hosts.py metaclass-boilerplate
test/units/modules/test_known_hosts.py pylint:ansible-bad-function
test/units/modules/test_systemd.py future-import-boilerplate
test/units/modules/test_systemd.py metaclass-boilerplate
test/units/modules/utils.py future-import-boilerplate
test/units/modules/utils.py metaclass-boilerplate
test/units/parsing/utils/test_addresses.py future-import-boilerplate
test/units/parsing/utils/test_addresses.py metaclass-boilerplate
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/playbook/test_attribute.py future-import-boilerplate
test/units/playbook/test_attribute.py metaclass-boilerplate
test/units/playbook/test_conditional.py future-import-boilerplate
test/units/playbook/test_conditional.py metaclass-boilerplate
test/units/plugins/inventory/test_constructed.py future-import-boilerplate
test/units/plugins/inventory/test_constructed.py metaclass-boilerplate
test/units/plugins/loader_fixtures/import_fixture.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py metaclass-boilerplate
test/units/plugins/shell/test_powershell.py future-import-boilerplate
test/units/plugins/shell/test_powershell.py metaclass-boilerplate
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/test_constants.py future-import-boilerplate
test/units/test_context.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py metaclass-boilerplate
test/units/utils/test_cleanup_tmp_file.py future-import-boilerplate
test/units/utils/test_encrypt.py future-import-boilerplate
test/units/utils/test_encrypt.py metaclass-boilerplate
test/units/utils/test_helpers.py future-import-boilerplate
test/units/utils/test_helpers.py metaclass-boilerplate
test/units/utils/test_shlex.py future-import-boilerplate
test/units/utils/test_shlex.py metaclass-boilerplate
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,400 |
uri module sets strings with masked content into content and json output
|
##### SUMMARY
uri module sets strings with masked content into content and json output
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
uri
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = ['/Users/hungluong/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hungluong/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/hungluong/Library/Python/3.7/bin/ansible
python version = 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
tasks:
- name: send request
uri:
url: "https://postman-echo.com/get?name=something-with-admin"
user: admin
password: admin
method: GET
force_basic_auth: yes
return_content: yes
status_code: 200
register: response
- name: extract value
vars:
query: args.name
set_fact:
value_content: "{{ response.content }}"
value_content_parsed: "{{ response.content | from_json | json_query(query) }}"
value_json: "{{ response.json.args.name }}"
- name: debug
debug:
msg:
- "{{ 'something-with-admin' in value_json }}"
- "{{ 'something-with-admin' in value_content }}"
- "{{ 'something-with-admin' in value_content_parsed }}"
- "{{ 'something-with-********' in value_json }}"
- "{{ 'something-with-********' in value_content }}"
- "{{ 'something-with-********' in value_content_parsed }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The module should return the json/content fields with the correct, unmasked values
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module seems to apply sensitive info masking ('********') to values matching the username/password in its output (a sketch of this masking follows the output below)
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": [
false,
false,
false,
true,
false,
true
]
```
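For reference, a minimal reproduction of the masking described above (assuming an Ansible checkout on the PYTHONPATH); `remove_values()` is the helper `AnsibleModule` uses to scrub no_log values from a result before returning it:
```python
# Minimal reproduction of the masking the report hits; remove_values()
# scrubs no_log values out of a module result, including substrings.
from ansible.module_utils.basic import remove_values

response = {'content': '{"args": {"name": "something-with-admin"}}'}
no_log_values = {'admin'}  # the task's username/password end up here

print(remove_values(response, no_log_values))
# {'content': '{"args": {"name": "something-with-********"}}'}
```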
|
https://github.com/ansible/ansible/issues/68400
|
https://github.com/ansible/ansible/pull/69653
|
cfd301a586302785fa888117deaf06955a240cdd
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
| 2020-03-23T11:01:05Z |
python
| 2020-05-21T20:17:57Z |
changelogs/fragments/68400-strip-no-log-values-from-keys.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,400 |
uri module sets strings with masked content into content and json output
|
##### SUMMARY
uri module sets strings with masked content into content and json output
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
uri
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = ['/Users/hungluong/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hungluong/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/hungluong/Library/Python/3.7/bin/ansible
python version = 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
tasks:
- name: send request
uri:
url: "https://postman-echo.com/get?name=something-with-admin"
user: admin
password: admin
method: GET
force_basic_auth: yes
return_content: yes
status_code: 200
register: response
- name: extract value
vars:
query: args.name
set_fact:
value_content: "{{ response.content }}"
value_content_parsed: "{{ response.content | from_json | json_query(query) }}"
value_json: "{{ response.json.args.name }}"
- name: debug
debug:
msg:
- "{{ 'something-with-admin' in value_json }}"
- "{{ 'something-with-admin' in value_content }}"
- "{{ 'something-with-admin' in value_content_parsed }}"
- "{{ 'something-with-********' in value_json }}"
- "{{ 'something-with-********' in value_content }}"
- "{{ 'something-with-********' in value_content_parsed }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The module should return the json/content values unmodified (the real values, without masking)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module appears to apply sensitive-info masking ('********') to values matching the username/password in its output
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": [
false,
false,
false,
true,
false,
true
]
```
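For reference, this behavior is consistent with a plain substring replacement over the task's `no_log` values, as done by `remove_values()` in `lib/ansible/module_utils/basic.py` (see the file content below). A minimal illustrative sketch of that behavior, not the module's exact code:

```python
# Each no_log value is replaced with eight asterisks, even when it only
# appears as a substring of an otherwise unrelated value.
no_log_values = {'admin'}  # the uri task's user/password value
content = '{"args": {"name": "something-with-admin"}}'
for secret in no_log_values:
    content = content.replace(secret, '*' * 8)
print(content)  # -> {"args": {"name": "something-with-********"}}
```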
| https://github.com/ansible/ansible/issues/68400 | https://github.com/ansible/ansible/pull/69653 | cfd301a586302785fa888117deaf06955a240cdd | e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e | 2020-03-23T11:01:05Z | python | 2020-05-21T20:17:57Z | lib/ansible/module_utils/basic.py |
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Double-check that systemd.journal has method sendv() (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
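# Editor's note (illustrative, not an original source comment): PASSWORD_MATCH
# is used below in _log_invocation() to catch likely password parameters that
# did not set no_log. It matches names such as 'password', 'pass',
# 'db_password' and 'passphrase', but not embedded words like 'compass' or
# 'passenger'.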
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
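# Editor's note (illustrative): PASSWD_ARG_RE matches CLI-style password flags
# at the start of a string, e.g. 'password', '--password', '-passwd', 'pass-wd'.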
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove those strings recursively from its elements as well"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
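# Editor's note, an illustrative example of remove_values() (not original code):
#   remove_values({'url': 'https://x', 'token': 'hunter2'}, {'hunter2'})
#   -> {'url': 'https://x', 'token': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}
# Partial matches inside longer strings are replaced with '********' instead.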
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:prev_begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
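# Editor's note, an illustrative example (not original code):
#   heuristic_log_sanitize('ssh user:[email protected] failed')
#   -> 'ssh user:********@example.com failed'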
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper is used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
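# Editor's note, illustrative usage (not original code): env_fallback is meant
# to be referenced from an argument_spec, e.g.
#   url=dict(type='str', fallback=(env_fallback, ['ANSIBLE_URL']))
# _set_fallbacks() below then reads ANSIBLE_URL when 'url' was not supplied.
# 'ANSIBLE_URL' is a hypothetical variable name used only for illustration.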
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
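# Editor's note, illustrative usage (not original code; HAS_LXML is a
# hypothetical flag a module would set when importing its dependency):
#   if not HAS_LXML:
#       module.fail_json(msg=missing_required_lib('lxml'))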
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"falling back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700; this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
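# Editor's note (illustrative): modules should create scratch files under
# self.tmpdir, e.g. tempfile.mkstemp(dir=self.tmpdir), so they are removed
# at exit unless keep_remote_files is in effect.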
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None):
deprecate(msg, version)
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files; this encapsulates common
options that the file module accepts so that they are directly
available to all modules and they can share code.
Allows overwriting the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
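# Editor's note, an illustrative example (not original code): on a typical
# Linux host, find_mount_point('/proc/self/status') walks up the path until
# os.path.ismount() is true and returns '/proc'.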
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the permissions apply to form the first element in
# 'permlist'. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length, one containing the requested
# permissions and one the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
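# Editor's note, an illustrative example (not original code): for a file
# whose current mode is 0o644, _symbolic_mode_to_octal(path_stat, 'u+x,g-r')
# yields 0o704 (adds user execute, drops group read).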
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
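# Editor's note (illustrative): for user='u' and perms='rw' (with use_umask
# False), the reduce() above ORs stat.S_IRUSR | stat.S_IWUSR into the
# returned mode.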
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'], deprecation['version'])
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], message['version'])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more values when a type='list' param is given with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value {0!r} (type {0.__class__.__name__}) in a string field was converted to {1!r} (type string). '
'If this does not look like what you expect, {2}').format(value, to_text(value), common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
for value in values:
try:
validated_params.append(type_checker(value))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
try:
param[k] = type_checker(value)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, k)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
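# Illustrative usage sketch (hypothetical call, not part of the original source):
#   git_path = module.get_bin_path('git', required=True, opt_dirs=['/usr/local/bin'])
# With required=True a missing executable ends the module via fail_json();
# otherwise None is returned and the caller must handle it.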
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
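# Illustrative usage sketch (hypothetical values, not part of the original source):
#   module.exit_json(changed=True, msg='service restarted')   # success path
#   module.fail_json(msg='config file missing', rc=2)         # failure path
# Both run do_cleanup_files(), strip no_log values via _return_formatted(),
# print the JSON result and exit with status 0 or 1 respectively.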
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
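# Illustrative usage sketch (hypothetical path, not part of the original source):
#   checksum = self.digest_from_file('/etc/hosts', 'sha256')
# returns the hex digest as a string, or None when the file does not exist.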
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file; returns the backup destination path ('' if the file does not exist), or calls fail_json on failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest.
os.rename is used because it is an atomic operation; the rest of the function
works around limitations and corner cases, and preserves the selinux context if possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy), the last of which happens on vagrant
# synced folders and other 'exotic' non-posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using a Python3 version older than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to move %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
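# Illustrative usage sketch (hypothetical paths, not part of the original source):
#   module.atomic_move('/tmp/newconf.tmp', '/etc/myapp.conf',
#                      unsafe_writes=module.params.get('unsafe_writes', False))
# Content is staged and then renamed into place, so readers never observe a
# half-written destination file.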
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
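# Illustrative sketch (hypothetical input, not part of the original source): given
#   args = ['mysql', '--password=hunter2', '-u', 'root']
# _clean_args() produces something like "mysql '--password=********' -u root"
# for logging, since '--password=hunter2' matches PASSWD_ARG_RE.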
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
selector = selectors.DefaultSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
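# Illustrative usage sketch (hypothetical command, not part of the original source):
#   rc, out, err = module.run_command(['/usr/bin/git', 'status'],
#                                     cwd='/srv/repo', check_rc=True)
# check_rc=True turns any non-zero rc into fail_json() with sanitized stderr;
# passing a list keeps shell=False, so no shell quoting is involved.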
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (Linux fcntl constant for querying the pipe buffer size)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use a sane default just in case
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,400 |
uri module set string with masked content into content and json output
|
##### SUMMARY
uri module set string with masked content into content and json output
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
uri
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = ['/Users/hungluong/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hungluong/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/hungluong/Library/Python/3.7/bin/ansible
python version = 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
tasks:
- name: send request
uri:
url: "https://postman-echo.com/get?name=something-with-admin"
user: admin
password: admin
method: GET
force_basic_auth: yes
return_content: yes
status_code: 200
register: response
- name: extract value
vars:
query: args.name
set_fact:
value_content: "{{ response.content }}"
value_content_parsed: "{{ response.content | from_json | json_query(query) }}"
value_json: "{{ response.json.args.name }}"
- name: debug
debug:
msg:
- "{{ 'something-with-admin' in value_json }}"
- "{{ 'something-with-admin' in value_content }}"
- "{{ 'something-with-admin' in value_content_parsed }}"
- "{{ 'something-with-********' in value_json }}"
- "{{ 'something-with-********' in value_content }}"
- "{{ 'something-with-********' in value_content_parsed }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The module should return the json/content value with the correct values
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module seems to apply sensitive info masking ('********') to values matching the username/password in its output
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": [
false,
false,
false,
true,
false,
true
]
```
|
https://github.com/ansible/ansible/issues/68400
|
https://github.com/ansible/ansible/pull/69653
|
cfd301a586302785fa888117deaf06955a240cdd
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
| 2020-03-23T11:01:05Z |
python
| 2020-05-21T20:17:57Z |
test/integration/targets/uri/tasks/main.yml
|
# test code for the uri module
# (c) 2014, Leonid Evdokimov <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: set role facts
set_fact:
http_port: 15260
files_dir: '{{ output_dir|expanduser }}/files'
checkout_dir: '{{ output_dir }}/git'
- name: create a directory to serve files from
file:
dest: "{{ files_dir }}"
state: directory
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=4 format=pass%d.json
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=30 format=fail%d.json
- copy:
src: "testserver.py"
dest: "{{ output_dir }}/testserver.py"
- name: start SimpleHTTPServer
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ output_dir}}/testserver.py {{ http_port }}
async: 120 # this test set can take ~1m to run on FreeBSD (via Shippable)
poll: 0
- wait_for: port={{ http_port }}
- name: checksum pass_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: pass_checksum
with_sequence: start=0 end=4 format=pass%d
- name: fetch pass_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fetch_pass_json
with_sequence: start=0 end=4 format=pass%d
- name: check pass_json
assert:
that:
- '"json" in item.1'
- item.0.stat.checksum == item.1.content | checksum
with_together:
- "{{pass_checksum.results}}"
- "{{fetch_pass_json.results}}"
- name: checksum fail_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: fail_checksum
with_sequence: start=0 end=30 format=fail%d
- name: fetch fail_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fail
with_sequence: start=0 end=30 format=fail%d
- name: check fail_json
assert:
that:
- item.0.stat.checksum == item.1.content | checksum
- '"json" not in item.1'
with_together:
- "{{fail_checksum.results}}"
- "{{fail.results}}"
- name: test https fetch to a site with mismatched hostname and certificate
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ output_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ output_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- result.failed == true
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or (result.msg is match('hostname .* doesn.t match .*'))"
- stat_result.stat.exists == false
- result.status is defined
- result.status == -1
- result.url == 'https://' ~ badssl_host ~ '/'
- name: Clean up any cruft from the results directory
file:
name: "{{ output_dir }}/kreitz.html"
state: absent
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ output_dir }}/kreitz.html"
validate_certs: no
register: result
- stat:
path: "{{ output_dir }}/kreitz.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- "stat_result.stat.exists == true"
- "result.changed == true"
- name: test redirect without follow_redirects
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
follow_redirects: 'none'
status_code: 302
register: result
- name: Assert location header
assert:
that:
- 'result.location|default("") == "https://{{ httpbin_host }}/relative-redirect/1"'
- name: Check SSL with redirect
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
register: result
- name: Assert SSL with redirect
assert:
that:
- 'result.url|default("") == "https://{{ httpbin_host }}/get"'
- name: redirect to bad SSL site
uri:
url: 'http://{{ badssl_host }}'
register: result
ignore_errors: true
- name: Ensure bad SSL site redirect fails
assert:
that:
- result is failed
- 'badssl_host in result.msg'
- name: test basic auth
uri:
url: 'https://{{ httpbin_host }}/basic-auth/user/passwd'
user: user
password: passwd
- name: test basic forced auth
uri:
url: 'https://{{ httpbin_host }}/hidden-basic-auth/user/passwd'
force_basic_auth: true
user: user
password: passwd
- name: test digest auth
uri:
url: 'https://{{ httpbin_host }}/digest-auth/auth/user/passwd'
user: user
password: passwd
headers:
Cookie: "fake=fake_value"
- name: test PUT
uri:
url: 'https://{{ httpbin_host }}/put'
method: PUT
body: 'foo=bar'
- name: test OPTIONS
uri:
url: 'https://{{ httpbin_host }}/'
method: OPTIONS
register: result
- name: Assert we got an allow header
assert:
that:
- 'result.allow.split(", ")|sort == ["GET", "HEAD", "OPTIONS"]'
# Ubuntu 12.04 doesn't have python-urllib3, which makes handling required dependencies a pain across all variations
# We'll use this to just skip 12.04 on those tests. We should be sufficiently covered with other OSes and versions
- name: Set fact if running on Ubuntu 12.04
set_fact:
is_ubuntu_precise: "{{ ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise' }}"
- name: Test that SNI succeeds on python versions that have SNI
uri:
url: 'https://{{ sni_host }}/'
return_content: true
when: ansible_python.has_sslcontext
register: result
- name: Assert SNI verification succeeds on new python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: ansible_python.has_sslcontext
- name: Verify SNI verification fails on old python without urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
ignore_errors: true
when: not ansible_python.has_sslcontext
register: result
- name: Assert SNI verification fails on old python
assert:
that:
- result is failed
when: result is not skipped
- name: check if urllib3 is installed as an OS package
package:
name: "{{ uri_os_packages[ansible_os_family].urllib3 }}"
check_mode: yes
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default
register: urllib3
- name: uninstall conflicting urllib3 pip package
pip:
name: urllib3
state: absent
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default and urllib3.changed
- name: install OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install python modules for Older Python SNI verification
pip:
name: "{{ item }}"
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall ndg-httpsclient
pip:
name: "{{ item }}"
state: absent
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: uninstall OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
state: absent
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install OS packages that are needed for building cryptography
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step2 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install urllib3 and pyopenssl via pip
pip:
name: "{{ item }}"
state: latest
extra_args: "-c {{ remote_constraints }}"
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with pip urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python with pip urllib3 contrib
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall urllib3 and PyOpenSSL
pip:
name: "{{ item }}"
state: absent
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: validate the status_codes are correct
uri:
url: "https://{{ httpbin_host }}/status/202"
status_code: 202
method: POST
body: foo
- name: Validate body_format json does not override content-type in 2.3 or newer
uri:
url: "https://{{ httpbin_host }}/post"
method: POST
body:
foo: bar
body_format: json
headers:
'Content-Type': 'text/json'
return_content: true
register: result
failed_when: result.json.headers['Content-Type'] != 'text/json'
- name: Validate body_format form-urlencoded using dicts works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
user: foo
password: bar!#@ |&82$M
submit: Sign in
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded dict input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
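# Illustrative note (not part of the original test file): with
# body_format=form-urlencoded the dict above is serialized roughly as
#   user=foo&password=bar%21%23%40+%7C%2682%24M&submit=Sign+in
# before being sent as the request body.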
- name: Validate body_format form-urlencoded using lists works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- [ user, foo ]
- [ password, bar!#@ |&82$M ]
- [ submit, Sign in ]
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded list input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
- name: Validate body_format form-urlencoded of invalid input fails
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- foo
- bar: baz
body_format: form-urlencoded
return_content: yes
register: result
ignore_errors: yes
- name: Assert invalid input fails
assert:
that:
- result is failure
- "'failed to parse body as form_urlencoded: too many values to unpack' in result.msg"
- name: multipart/form-data
uri:
url: https://{{ httpbin_host }}/post
method: POST
body_format: form-multipart
body:
file1:
filename: formdata.txt
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field1: value1
text_form_field2:
content: value2
mime_type: text/plain
register: multipart
- name: Assert multipart/form-data
assert:
that:
- multipart.json.files.file1 == '_multipart/form-data_\n'
- multipart.json.files.file2 == 'text based file content'
- multipart.json.form.text_form_field1 == 'value1'
- multipart.json.form.text_form_field2 == 'value2'
- name: Validate invalid method
uri:
url: https://{{ httpbin_host }}/anything
method: UNKNOWN
register: result
ignore_errors: yes
- name: Assert invalid method fails
assert:
that:
- result is failure
- result.status == 405
- "'METHOD NOT ALLOWED' in result.msg"
- name: Test client cert auth, no certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
status_code: 200
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:NONE"
when: has_httptester
- name: Test client cert auth, with certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with no validation
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: no
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with validation and ssl mismatch
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: yes
register: result
failed_when: result is not failed
when: has_httptester
- uri:
url: https://{{ httpbin_host }}/response-headers?Set-Cookie=Foo%3Dbar&Set-Cookie=Baz%3Dqux
register: result
- assert:
that:
- result['set_cookie'] == 'Foo=bar, Baz=qux'
# Python sorts cookies in order of most specific (i.e. longest) path first;
# items with the same path are reversed from response order
- result['cookies_string'] == 'Baz=qux; Foo=bar'
- name: Write out netrc template
template:
src: netrc.j2
dest: "{{ remote_tmp_dir }}/netrc"
- name: Test netrc with port
uri:
url: "https://{{ httpbin_host }}:443/basic-auth/user/passwd"
environment:
NETRC: "{{ remote_tmp_dir }}/netrc"
- name: Test JSON POST with src
uri:
url: "https://{{ httpbin_host}}/post"
src: pass0.json
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Copy file pass0.json to remote
copy:
src: "{{ role_path }}/files/pass0.json"
dest: "{{ remote_tmp_dir }}/pass0.json"
- name: Test JSON POST with src and remote_src=True
uri:
url: "https://{{ httpbin_host}}/post"
src: "{{ remote_tmp_dir }}/pass0.json"
remote_src: true
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src and remote_src=True works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Create a testing file
copy:
content: "content"
dest: "{{ output_dir }}/output"
- name: Download a file from non existing location
uri:
url: http://does/not/exist
dest: "{{ output_dir }}/output"
ignore_errors: yes
- name: Save testing file's output
command: "cat {{ output_dir }}/output"
register: file_out
- name: Test the testing file was not overwritten
assert:
that:
- "'content' in file_out.stdout"
- name: Clean up
file:
dest: "{{ output_dir }}/output"
state: absent
- name: Test follow_redirects=none
import_tasks: redirect-none.yml
- name: Test follow_redirects=safe
import_tasks: redirect-safe.yml
- name: Test follow_redirects=urllib2
import_tasks: redirect-urllib2.yml
- name: Test follow_redirects=all
import_tasks: redirect-all.yml
- name: Check unexpected failures
import_tasks: unexpected-failures.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,400 |
uri module set string with masked content into content and json output
|
##### SUMMARY
uri module set string with masked content into content and json output
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
uri
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = ['/Users/hungluong/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hungluong/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/hungluong/Library/Python/3.7/bin/ansible
python version = 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
tasks:
- name: send request
uri:
url: "https://postman-echo.com/get?name=something-with-admin"
user: admin
password: admin
method: GET
force_basic_auth: yes
return_content: yes
status_code: 200
register: response
- name: extract value
vars:
query: args.name
set_fact:
value_content: "{{ response.content }}"
value_content_parsed: "{{ response.content | from_json | json_query(query) }}"
value_json: "{{ response.json.args.name }}"
- name: debug
debug:
msg:
- "{{ 'something-with-admin' in value_json }}"
- "{{ 'something-with-admin' in value_content }}"
- "{{ 'something-with-admin' in value_content_parsed }}"
- "{{ 'something-with-********' in value_json }}"
- "{{ 'something-with-********' in value_content }}"
- "{{ 'something-with-********' in value_content_parsed }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The module should return the json/content value with the correct values
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module seems to apply sensitive info masking ('********') to values matching the username/password in its output
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": [
false,
false,
false,
true,
false,
true
]
```
|
https://github.com/ansible/ansible/issues/68400
|
https://github.com/ansible/ansible/pull/69653
|
cfd301a586302785fa888117deaf06955a240cdd
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
| 2020-03-23T11:01:05Z |
python
| 2020-05-21T20:17:57Z |
test/units/module_utils/basic/test_no_log.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
# (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from units.compat import unittest
from ansible.module_utils.basic import remove_values
from ansible.module_utils.common.parameters import _return_datastructure_name
class TestReturnValues(unittest.TestCase):
dataset = (
('string', frozenset(['string'])),
('', frozenset()),
(1, frozenset(['1'])),
(1.0, frozenset(['1.0'])),
(False, frozenset()),
(['1', '2', '3'], frozenset(['1', '2', '3'])),
(('1', '2', '3'), frozenset(['1', '2', '3'])),
({'one': 1, 'two': 'dos'}, frozenset(['1', 'dos'])),
(
{
'one': 1,
'two': 'dos',
'three': [
'amigos', 'musketeers', None, {
'ping': 'pong',
'base': (
'balls', 'raquets'
)
}
]
},
frozenset(['1', 'dos', 'amigos', 'musketeers', 'pong', 'balls', 'raquets'])
),
(u'Toshio くらとみ', frozenset(['Toshio くらとみ'])),
('Toshio くらとみ', frozenset(['Toshio くらとみ'])),
)
def test_return_datastructure_name(self):
for data, expected in self.dataset:
self.assertEqual(frozenset(_return_datastructure_name(data)), expected)
def test_unknown_type(self):
self.assertRaises(TypeError, frozenset, _return_datastructure_name(object()))
class TestRemoveValues(unittest.TestCase):
OMIT = 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
dataset_no_remove = (
('string', frozenset(['nope'])),
(1234, frozenset(['4321'])),
(False, frozenset(['4321'])),
(1.0, frozenset(['4321'])),
(['string', 'strang', 'strung'], frozenset(['nope'])),
({'one': 1, 'two': 'dos', 'secret': 'key'}, frozenset(['nope'])),
(
{
'one': 1,
'two': 'dos',
'three': [
'amigos', 'musketeers', None, {
'ping': 'pong', 'base': ['balls', 'raquets']
}
]
},
frozenset(['nope'])
),
(u'Toshio くら'.encode('utf-8'), frozenset([u'とみ'.encode('utf-8')])),
(u'Toshio くら', frozenset([u'とみ'])),
)
dataset_remove = (
('string', frozenset(['string']), OMIT),
(1234, frozenset(['1234']), OMIT),
(1234, frozenset(['23']), OMIT),
(1.0, frozenset(['1.0']), OMIT),
(['string', 'strang', 'strung'], frozenset(['strang']), ['string', OMIT, 'strung']),
(['string', 'strang', 'strung'], frozenset(['strang', 'string', 'strung']), [OMIT, OMIT, OMIT]),
(('string', 'strang', 'strung'), frozenset(['string', 'strung']), [OMIT, 'strang', OMIT]),
((1234567890, 345678, 987654321), frozenset(['1234567890']), [OMIT, 345678, 987654321]),
((1234567890, 345678, 987654321), frozenset(['345678']), [OMIT, OMIT, 987654321]),
({'one': 1, 'two': 'dos', 'secret': 'key'}, frozenset(['key']), {'one': 1, 'two': 'dos', 'secret': OMIT}),
({'one': 1, 'two': 'dos', 'secret': 'key'}, frozenset(['key', 'dos', '1']), {'one': OMIT, 'two': OMIT, 'secret': OMIT}),
({'one': 1, 'two': 'dos', 'secret': 'key'}, frozenset(['key', 'dos', '1']), {'one': OMIT, 'two': OMIT, 'secret': OMIT}),
(
{
'one': 1,
'two': 'dos',
'three': [
'amigos', 'musketeers', None, {
'ping': 'pong', 'base': [
'balls', 'raquets'
]
}
]
},
frozenset(['balls', 'base', 'pong', 'amigos']),
{
'one': 1,
'two': 'dos',
'three': [
OMIT, 'musketeers', None, {
'ping': OMIT,
'base': [
OMIT, 'raquets'
]
}
]
}
),
(
'This sentence has an enigma wrapped in a mystery inside of a secret. - mr mystery',
frozenset(['enigma', 'mystery', 'secret']),
'This sentence has an ******** wrapped in a ******** inside of a ********. - mr ********'
),
(u'Toshio くらとみ'.encode('utf-8'), frozenset([u'くらとみ'.encode('utf-8')]), u'Toshio ********'.encode('utf-8')),
(u'Toshio くらとみ', frozenset([u'くらとみ']), u'Toshio ********'),
)
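# Illustrative note (not part of the original tests): remove_values() walks
# containers recursively, replacing non-string values and full-string matches
# with 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' and masking hits inside longer
# text with '********', as the cases above demonstrate.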
def test_no_removal(self):
for value, no_log_strings in self.dataset_no_remove:
self.assertEqual(remove_values(value, no_log_strings), value)
def test_strings_to_remove(self):
for value, no_log_strings, expected in self.dataset_remove:
self.assertEqual(remove_values(value, no_log_strings), expected)
def test_unknown_type(self):
self.assertRaises(TypeError, remove_values, object(), frozenset())
def test_hit_recursion_limit(self):
""" Check that we do not hit a recursion limit"""
data_list = []
inner_list = data_list
for i in range(0, 10000):
new_list = []
inner_list.append(new_list)
inner_list = new_list
inner_list.append('secret')
# Check that this does not hit a recursion limit
actual_data_list = remove_values(data_list, frozenset(('secret',)))
levels = 0
inner_list = actual_data_list
while inner_list:
if isinstance(inner_list, list):
self.assertEqual(len(inner_list), 1)
else:
levels -= 1
break
inner_list = inner_list[0]
levels += 1
self.assertEqual(inner_list, self.OMIT)
self.assertEqual(levels, 10000)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,171 |
file module no longer returning state=absent for missing files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`file` is "stableinterface" and stopped setting state=absent in its return value. This broke real-world ansible roles. It works with ansible-2.7, broken in 2.8. There was no deprecation warning in 2.7 about the impending breakage.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
file
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```bash
ansible localhost -m file -a owner=root\ group=root\ path=/root/nosuchdir
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```paste below
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
localhost | FAILED! => {
"changed": false,
"msg": "file (/root/nosuchdir) is absent, cannot continue",
"path": "/root/nosuchdir",
"state": "absent"
}
```
(as on ansible 2.7):
```
ansible 2.7.10
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
localhost | FAILED! => {
"changed": false,
"msg": "file (/root/nosuchdir) is absent, cannot continue",
"path": "/root/nosuchdir"
}
```
|
https://github.com/ansible/ansible/issues/66171
|
https://github.com/ansible/ansible/pull/66503
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
| 2020-01-03T07:07:34Z |
python
| 2020-05-21T20:35:45Z |
changelogs/fragments/file-return-state-when-file-does-not-exist.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,171 |
file module no longer returning state=absent for missing files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`file` is "stableinterface" and stopped setting state=absent in its return value. This broke real-world ansible roles. It works with ansible-2.7, broken in 2.8. There was no deprecation warning in 2.7 about the impending breakage.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
file
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```bash
ansible localhost -m file -a owner=root\ group=root\ path=/root/nosuchdir
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```paste below
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
localhost | FAILED! => {
"changed": false,
"msg": "file (/root/nosuchdir) is absent, cannot continue",
"path": "/root/nosuchdir",
"state": "absent"
}
```
(as on ansible 2.7):
```
ansible 2.7.10
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
localhost | FAILED! => {
"changed": false,
"msg": "file (/root/nosuchdir) is absent, cannot continue",
"path": "/root/nosuchdir"
}
```
|
https://github.com/ansible/ansible/issues/66171
|
https://github.com/ansible/ansible/pull/66503
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
| 2020-01-03T07:07:34Z |
python
| 2020-05-21T20:35:45Z |
lib/ansible/modules/file.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: file
version_added: historical
short_description: Manage files and file properties
extends_documentation_fragment: files
description:
- Set attributes of files, symlinks or directories.
- Alternatively, remove files, symlinks or directories.
- Many other modules support the same options as the C(file) module - including M(copy), M(template), and M(assemble).
- For Windows targets, use the M(win_file) module instead.
options:
path:
description:
- Path to the file being managed.
type: path
required: yes
aliases: [ dest, name ]
state:
description:
- If C(absent), directories will be recursively deleted, and files or symlinks will
be unlinked. In the case of a directory, if C(diff) is declared, you will see the files and folders deleted listed
under C(path_contents). Note that C(absent) will not cause C(file) to fail if the C(path) does
not exist as the state did not change.
- If C(directory), all intermediate subdirectories will be created if they
do not exist. Since Ansible 1.7 they will be created with the supplied permissions.
- If C(file), without any other options this works mostly as a 'stat' and will return the current state of C(path).
Even with other options (e.g. C(mode)), the file will be modified but will NOT be created if it does not exist;
see the C(touch) value or the M(copy) or M(template) module if you want that behavior.
- If C(hard), the hard link will be created or changed.
- If C(link), the symbolic link will be created or changed.
- If C(touch) (new in 1.4), an empty file will be created if the C(path) does not
exist, while an existing file or directory will receive updated file access and
modification times (similar to the way C(touch) works from the command line).
type: str
default: file
choices: [ absent, directory, file, hard, link, touch ]
src:
description:
- Path of the file to link to.
- This applies only to C(state=link) and C(state=hard).
- For C(state=link), this will also accept a non-existing path.
- Relative paths are relative to the file being created (C(path)) which is how
the Unix command C(ln -s SRC DEST) treats relative paths.
type: path
recurse:
description:
- Recursively set the specified file attributes on directory contents.
- This applies only when C(state) is set to C(directory).
type: bool
default: no
version_added: '1.1'
force:
description:
- >
Force the creation of the symlinks in two cases: the source file does
not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
C(path) file and create a symlink to the C(src) file in its place).
type: bool
default: no
follow:
description:
- This flag indicates that filesystem links, if they exist, should be followed.
- Previous to Ansible 2.5, this was C(no) by default.
type: bool
default: yes
version_added: '1.8'
modification_time:
description:
- This parameter indicates the time the file's modification time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None), meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: "2.7"
modification_time_format:
description:
- When used with C(modification_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
access_time:
description:
- This parameter indicates the time the file's access time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None) meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: '2.7'
access_time_format:
description:
- When used with C(access_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
seealso:
- module: assemble
- module: copy
- module: stat
- module: template
- module: win_file
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Change file ownership, group and permissions
file:
path: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Give insecure permissions to an existing file
file:
path: /work
owner: root
group: root
mode: '1777'
- name: Create a symbolic link
file:
src: /file/to/link/to
dest: /path/to/symlink
owner: foo
group: foo
state: link
- name: Create two hard links
file:
src: '/tmp/{{ item.src }}'
dest: '{{ item.dest }}'
state: hard
loop:
- { src: x, dest: y }
- { src: z, dest: k }
- name: Touch a file, using symbolic modes to set the permissions (equivalent to 0644)
file:
path: /etc/foo.conf
state: touch
mode: u=rw,g=r,o=r
- name: Touch the same file, but add/remove some permissions
file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
- name: Touch the same file again, but do not change times; this makes the task idempotent
file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
modification_time: preserve
access_time: preserve
- name: Create a directory if it does not exist
file:
path: /etc/some_directory
state: directory
mode: '0755'
- name: Update modification and access time of given file
file:
path: /etc/some_file
state: file
modification_time: now
access_time: now
- name: Set access time based on seconds from epoch value
file:
path: /etc/another_file
state: file
access_time: '{{ "%Y%m%d%H%M.%S" | strftime(stat_var.stat.atime) }}'
- name: Recursively change ownership of a directory
file:
path: /etc/foo
state: directory
recurse: yes
owner: foo
group: foo
- name: Remove file (delete file)
file:
path: /etc/foo.txt
state: absent
- name: Recursively remove directory
file:
path: /etc/foo
state: absent
'''
RETURN = r'''
'''
import errno
import os
import shutil
import sys
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
# There will only be a single AnsibleModule object per module
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
    def __repr__(self):
        return 'AnsibleModuleError(results={0})'.format(self.results)
class ParameterError(AnsibleModuleError):
pass
class Sentinel(object):
def __new__(cls, *args, **kwargs):
return cls
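# NOTE: Sentinel.__new__ returns the class itself, so Sentinel() is Sentinel;
# identity checks such as `mtime is Sentinel` work whether callers pass the
# class or "instantiate" it.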
def _ansible_excepthook(exc_type, exc_value, tb):
# Using an exception allows us to catch it if the calling code knows it can recover
if issubclass(exc_type, AnsibleModuleError):
module.fail_json(**exc_value.results)
else:
sys.__excepthook__(exc_type, exc_value, tb)
def additional_parameter_handling(params):
"""Additional parameter validation and reformatting"""
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link? Why don't we exclude directory? Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent: Remove the directory
# if state == touch: Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file: place inside of the directory (use _original_basename)
# if state == link: place inside of the directory (use _original_basename. Fallback to src?)
# if state == hard: place inside of the directory (use _original_basename. Fallback to src?)
if (params['state'] not in ("link", "absent") and os.path.isdir(to_bytes(params['path'], errors='surrogate_or_strict'))):
basename = None
if params['_original_basename']:
basename = params['_original_basename']
elif params['src']:
basename = os.path.basename(params['src'])
if basename:
params['path'] = os.path.join(params['path'], basename)
    # state should default to 'file', but since that creates many conflicts,
    # default state to the current state when the path already exists.
prev_state = get_state(to_bytes(params['path'], errors='surrogate_or_strict'))
if params['state'] is None:
if prev_state != 'absent':
params['state'] = prev_state
elif params['recurse']:
params['state'] = 'directory'
else:
params['state'] = 'file'
# make sure the target path is a directory when we're doing a recursive operation
if params['recurse'] and params['state'] != 'directory':
raise ParameterError(results={"msg": "recurse option requires state to be 'directory'",
"path": params["path"]})
# Fail if 'src' but no 'state' is specified
if params['src'] and params['state'] not in ('link', 'hard'):
raise ParameterError(results={'msg': "src option requires state to be 'link' or 'hard'",
'path': params['path']})
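# Illustration of the rewrite above (hypothetical values): a caller passing
# path=/etc with _original_basename=foo.conf ends up operating on
# /etc/foo.conf, because the directory path is rewritten to point at the file
# inside it.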
def get_state(path):
''' Find out current state '''
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
if os.path.lexists(b_path):
if os.path.islink(b_path):
return 'link'
elif os.path.isdir(b_path):
return 'directory'
elif os.stat(b_path).st_nlink > 1:
return 'hard'
# could be many other things, but defaulting to file
return 'file'
return 'absent'
except OSError as e:
if e.errno == errno.ENOENT: # It may already have been removed
return 'absent'
else:
raise
# This should be moved into the common file utilities
def recursive_set_attributes(b_path, follow, file_args, mtime, atime):
changed = False
try:
for b_root, b_dirs, b_files in os.walk(b_path):
for b_fsobj in b_dirs + b_files:
b_fsname = os.path.join(b_root, b_fsobj)
if not os.path.islink(b_fsname):
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
else:
# Change perms on the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
if follow:
b_fsname = os.path.join(b_root, os.readlink(b_fsname))
# The link target could be nonexistent
if os.path.exists(b_fsname):
if os.path.isdir(b_fsname):
# Link is a directory so change perms on the directory's contents
changed |= recursive_set_attributes(b_fsname, follow, file_args, mtime, atime)
# Change perms on the file pointed to by the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
except RuntimeError as e:
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
raise AnsibleModuleError(
results={'msg': "Could not recursively set attributes on %s. Original error was: '%s'" % (to_native(b_path), to_native(e))}
)
return changed
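# Note: os.walk() does not follow symlinks by default, so link targets are
# only descended into via the explicit follow branch above.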
def initial_diff(path, state, prev_state):
diff = {'before': {'path': path},
'after': {'path': path},
}
if prev_state != state:
diff['before']['state'] = prev_state
diff['after']['state'] = state
if state == 'absent' and prev_state == 'directory':
walklist = {
'directories': [],
'files': [],
}
b_path = to_bytes(path, errors='surrogate_or_strict')
for base_path, sub_folders, files in os.walk(b_path):
for folder in sub_folders:
folderpath = os.path.join(base_path, folder)
walklist['directories'].append(folderpath)
for filename in files:
filepath = os.path.join(base_path, filename)
walklist['files'].append(filepath)
diff['before']['path_content'] = walklist
return diff
#
# States
#
def get_timestamp_for_time(formatted_time, time_format):
if formatted_time == 'preserve':
return None
elif formatted_time == 'now':
return Sentinel
else:
try:
struct = time.strptime(formatted_time, time_format)
struct_time = time.mktime(struct)
except (ValueError, OverflowError) as e:
raise AnsibleModuleError(results={'msg': 'Error while obtaining timestamp for time %s using format %s: %s'
% (formatted_time, time_format, to_native(e, nonstring='simplerepr'))})
return struct_time
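# In short: 'preserve' -> None (leave the timestamp alone), 'now' -> Sentinel
# (resolved to time.time() later in update_timestamp_for_file), anything else
# -> seconds since the epoch parsed with the supplied time_format.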
def update_timestamp_for_file(path, mtime, atime, diff=None):
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
if mtime is Sentinel and atime is Sentinel:
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
mtime = atime = time.time()
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
set_time = None
else:
            # If both parameters are None ('preserve'), there is nothing to do
if mtime is None and atime is None:
return False
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
if mtime is None:
mtime = previous_mtime
elif mtime is Sentinel:
mtime = time.time()
if atime is None:
atime = previous_atime
elif atime is Sentinel:
atime = time.time()
# If both timestamps are already ok, nothing to do
if mtime == previous_mtime and atime == previous_atime:
return False
set_time = (atime, mtime)
os.utime(b_path, set_time)
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
if 'after' not in diff:
diff['after'] = {}
if mtime != previous_mtime:
diff['before']['mtime'] = previous_mtime
diff['after']['mtime'] = mtime
if atime != previous_atime:
diff['before']['atime'] = previous_atime
diff['after']['atime'] = atime
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while updating modification or access time: %s'
% to_native(e, nonstring='simplerepr'), 'path': path})
return True
def keep_backward_compatibility_on_timestamps(parameter, state):
if state in ['file', 'hard', 'directory', 'link'] and parameter is None:
return 'preserve'
elif state == 'touch' and parameter is None:
return 'now'
else:
return parameter
def execute_diff_peek(path):
"""Take a guess as to whether a file is a binary file"""
b_path = to_bytes(path, errors='surrogate_or_strict')
appears_binary = False
try:
with open(b_path, 'rb') as f:
head = f.read(8192)
except Exception:
# If we can't read the file, we're okay assuming it's text
pass
else:
if b"\x00" in head:
appears_binary = True
return appears_binary
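# The heuristic above mirrors tools like grep: a NUL byte in the first 8 KiB
# is taken to mean "binary"; files we cannot read are assumed to be text.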
def ensure_absent(path):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
result = {}
if prev_state != 'absent':
diff = initial_diff(path, 'absent', prev_state)
if not module.check_mode:
if prev_state == 'directory':
try:
shutil.rmtree(b_path, ignore_errors=False)
except Exception as e:
raise AnsibleModuleError(results={'msg': "rmtree failed: %s" % to_native(e)})
else:
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise AnsibleModuleError(results={'msg': "unlinking failed: %s " % to_native(e),
'path': path})
result.update({'path': path, 'changed': True, 'diff': diff, 'state': 'absent'})
else:
result.update({'path': path, 'changed': False, 'state': 'absent'})
return result
def execute_touch(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
changed = False
result = {'dest': path}
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if not module.check_mode:
if prev_state == 'absent':
# Create an empty file if the filename did not already exist
try:
open(b_path, 'wb').close()
changed = True
except (OSError, IOError) as e:
raise AnsibleModuleError(results={'msg': 'Error, could not touch target: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
# Update the attributes on the file
diff = initial_diff(path, 'touch', prev_state)
file_args = module.load_file_common_arguments(module.params)
try:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except SystemExit as e:
if e.code:
# We take this to mean that fail_json() was called from
# somewhere in basic.py
if prev_state == 'absent':
# If we just created the file we can safely remove it
os.remove(b_path)
raise
result['changed'] = changed
result['diff'] = diff
return result
def ensure_file_attributes(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if prev_state != 'file':
if follow and prev_state == 'link':
# follow symlink and operate on original
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
prev_state = get_state(b_path)
file_args['path'] = path
if prev_state not in ('file', 'hard'):
            # the path exists in some other form (directory, link, ...), which is a conflict
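            # NOTE: this is the failure path shown in the issue above; before the
            # linked fix, the results dict here carried 'msg' and 'path' but not
            # the 'state' key that 2.7-era callers depended on.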
raise AnsibleModuleError(results={'msg': 'file (%s) is %s, cannot continue' % (path, prev_state),
'path': path})
diff = initial_diff(path, 'file', prev_state)
changed = module.set_fs_attributes_if_different(file_args, False, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_directory(path, follow, recurse, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# For followed symlinks, we need to operate on the target of the link
if follow and prev_state == 'link':
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
file_args['path'] = path
prev_state = get_state(b_path)
changed = False
diff = initial_diff(path, 'directory', prev_state)
if prev_state == 'absent':
# Create directory and assign permissions to it
if module.check_mode:
return {'changed': True, 'diff': diff}
curpath = ''
try:
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path. We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
for dirname in path.strip('/').split('/'):
curpath = '/'.join([curpath, dirname])
# Remove leading slash if we're creating a relative path
if not os.path.isabs(path):
curpath = curpath.lstrip('/')
b_curpath = to_bytes(curpath, errors='surrogate_or_strict')
if not os.path.exists(b_curpath):
try:
os.mkdir(b_curpath)
changed = True
except OSError as ex:
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
if not (ex.errno == errno.EEXIST and os.path.isdir(b_curpath)):
raise
tmp_file_args = file_args.copy()
tmp_file_args['path'] = curpath
changed = module.set_fs_attributes_if_different(tmp_file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except Exception as e:
raise AnsibleModuleError(results={'msg': 'There was an issue creating %s as requested:'
' %s' % (curpath, to_native(e)),
'path': path})
return {'path': path, 'changed': changed, 'diff': diff}
elif prev_state != 'directory':
# We already know prev_state is not 'absent', therefore it exists in some form.
raise AnsibleModuleError(results={'msg': '%s already exists as a %s' % (path, prev_state),
'path': path})
#
# previous state == directory
#
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
if recurse:
changed |= recursive_set_attributes(b_path, follow, file_args, mtime, atime)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_symlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
    # src serves two purposes: the target of a symlink, or an informational
    # pass-through of src from the template/copy modules; even when this module
    # never uses it directly, it is needed to key off some behaviors
if src is None:
if follow:
# use the current target of the link as the source
src = to_native(os.path.realpath(b_path), errors='strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
if not os.path.islink(b_path) and os.path.isdir(b_path):
relpath = path
else:
b_relpath = os.path.dirname(b_path)
relpath = to_native(b_relpath, errors='strict')
absrc = os.path.join(relpath, src)
b_absrc = to_bytes(absrc, errors='surrogate_or_strict')
if not force and not os.path.exists(b_absrc):
raise AnsibleModuleError(results={'msg': 'src file does not exist, use "force=yes" if you'
' really want to create the link: %s' % absrc,
'path': path, 'src': src})
if prev_state == 'directory':
if not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
elif os.listdir(b_path):
# refuse to replace a directory that has files in it
raise AnsibleModuleError(results={'msg': 'the directory %s is not empty, refusing to'
' convert it' % path,
'path': path})
elif prev_state in ('file', 'hard') and not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
diff = initial_diff(path, 'link', prev_state)
changed = False
if prev_state in ('hard', 'file', 'directory', 'absent'):
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
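            # symlink into a unique temp name, then os.rename() over b_path;
            # rename() atomically replaces the destination on POSIX, so other
            # readers never observe the link as missing mid-swap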
try:
if prev_state == 'directory':
os.rmdir(b_path)
os.symlink(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.symlink(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
    # because load_file_common_arguments sets 'path' according to
# the value of follow and the symlink existence.
file_args = module.load_file_common_arguments(module.params)
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it. Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
if follow and os.path.islink(b_path) and not os.path.exists(file_args['path']):
module.warn('Cannot set fs attributes on a non-existent symlink target. follow should be'
' set to False to avoid this.')
else:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def ensure_hardlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is the source of a hardlink. We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
if src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new hardlinks'})
if not os.path.exists(b_src):
raise AnsibleModuleError(results={'msg': 'src does not exist', 'dest': path, 'src': src})
diff = initial_diff(path, 'hard', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
        if os.stat(b_path).st_ino != os.stat(b_src).st_ino:
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, different hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, %s exists at destination' % prev_state,
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if os.stat(b_path).st_ino == os.stat(b_src).st_ino:
return {'path': path, 'changed': False}
elif not force:
raise AnsibleModuleError(results={'msg': 'Cannot link: different hard link exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
if os.path.exists(b_path):
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise
os.link(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.link(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def main():
global module
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['absent', 'directory', 'file', 'hard', 'link', 'touch']),
path=dict(type='path', required=True, aliases=['dest', 'name']),
_original_basename=dict(type='str'), # Internal use only, for recursive ops
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False), # Note: Should not be in file_common_args in future
follow=dict(type='bool', default=True), # Note: Different default than file_common_args
_diff_peek=dict(type='bool'), # Internal use only, for internal checks in the action plugins
src=dict(type='path'), # Note: Should not be in file_common_args in future
modification_time=dict(type='str'),
modification_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
access_time=dict(type='str'),
access_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
),
add_file_common_args=True,
supports_check_mode=True,
)
# When we rewrite basic.py, we will do something similar to this on instantiating an AnsibleModule
sys.excepthook = _ansible_excepthook
additional_parameter_handling(module.params)
params = module.params
state = params['state']
recurse = params['recurse']
force = params['force']
follow = params['follow']
path = params['path']
src = params['src']
timestamps = {}
timestamps['modification_time'] = keep_backward_compatibility_on_timestamps(params['modification_time'], state)
timestamps['modification_time_format'] = params['modification_time_format']
timestamps['access_time'] = keep_backward_compatibility_on_timestamps(params['access_time'], state)
timestamps['access_time_format'] = params['access_time_format']
# short-circuit for diff_peek
if params['_diff_peek'] is not None:
appears_binary = execute_diff_peek(to_bytes(path, errors='surrogate_or_strict'))
module.exit_json(path=path, changed=False, appears_binary=appears_binary)
if state == 'file':
result = ensure_file_attributes(path, follow, timestamps)
elif state == 'directory':
result = ensure_directory(path, follow, recurse, timestamps)
elif state == 'link':
result = ensure_symlink(path, src, follow, force, timestamps)
elif state == 'hard':
result = ensure_hardlink(path, src, follow, force, timestamps)
elif state == 'touch':
result = execute_touch(path, follow, timestamps)
elif state == 'absent':
result = ensure_absent(path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,171 |
file module no longer returning state=absent for missing files
| |
https://github.com/ansible/ansible/issues/66171
|
https://github.com/ansible/ansible/pull/66503
|
e0f25a2b1f9e6c21f751ba0ed2dc2eee2152983e
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
| 2020-01-03T07:07:34Z |
python
| 2020-05-21T20:35:45Z |
test/integration/targets/file/tasks/main.yml
|
# Test code for the file module.
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact: output_file={{output_dir}}/foo.txt
# same as expanduser & expandvars called on managed host
- command: 'echo {{ output_file }}'
register: echo
- set_fact:
remote_file_expanded: '{{ echo.stdout }}'
# Import the test tasks
- name: Run tests for state=link
import_tasks: state_link.yml
- name: Run tests for directory as dest
import_tasks: directory_as_dest.yml
- name: Run tests for unicode
import_tasks: unicode_path.yml
environment:
LC_ALL: C
LANG: C
- name: decide whether to include the selinux tests
include_tasks: selinux_tests.yml
when: selinux_installed is defined and selinux_installed.stdout != "" and selinux_enabled.stdout != "Disabled"
- name: Initialize the test output dir
import_tasks: initialize.yml
- name: Test _diff_peek
import_tasks: diff_peek.yml
# These tests need to be organized by state parameter into separate files later
- name: verify that we are checking a file and it is present
file: path={{output_file}} state=file
register: file_result
- name: verify that the file was not marked as changed
assert:
that:
- "file_result.changed == false"
- "file_result.state == 'file'"
- name: verify that we are checking an absent file
file: path={{output_dir}}/bar.txt state=absent
register: file2_result
- name: verify that the file was not marked as changed
assert:
that:
- "file2_result.changed == false"
- "file2_result.state == 'absent'"
- name: verify we can touch a file
file: path={{output_dir}}/baz.txt state=touch
register: file3_result
- name: verify that the file was marked as changed
assert:
that:
- "file3_result.changed == true"
- "file3_result.state == 'file'"
- "file3_result.mode == '0644'"
- name: change file mode
file: path={{output_dir}}/baz.txt mode=0600
register: file4_result
- name: verify that the file was marked as changed
assert:
that:
- "file4_result.changed == true"
- "file4_result.mode == '0600'"
- name: explicitly set file attribute "A"
file: path={{output_dir}}/baz.txt attributes=A
register: file_attributes_result
ignore_errors: True
- name: add file attribute "A"
file: path={{output_dir}}/baz.txt attributes=+A
register: file_attributes_result_2
when: file_attributes_result is changed
- name: verify that the file was not marked as changed
assert:
that:
- "file_attributes_result_2 is not changed"
when: file_attributes_result is changed
- name: remove file attribute "A"
file: path={{output_dir}}/baz.txt attributes=-A
register: file_attributes_result_3
ignore_errors: True
- name: explicitly remove file attributes
file: path={{output_dir}}/baz.txt attributes=""
register: file_attributes_result_4
when: file_attributes_result_3 is changed
- name: verify that the file was not marked as changed
assert:
that:
- "file_attributes_result_4 is not changed"
  when: file_attributes_result_3 is changed
- name: change ownership and group
file: path={{output_dir}}/baz.txt owner=1234 group=1234
- name: Get stat info to check atime later
stat: path={{output_dir}}/baz.txt
register: file_attributes_result_5_before
- name: updates access time
file: path={{output_dir}}/baz.txt access_time=now
register: file_attributes_result_5
- name: Get stat info to compare atime after the update
stat: path={{output_dir}}/baz.txt
register: file_attributes_result_5_after
- name: verify that the file was marked as changed and atime changed
assert:
that:
- "file_attributes_result_5 is changed"
- "file_attributes_result_5_after['stat']['atime'] != file_attributes_result_5_before['stat']['atime']"
- name: setup a tmp-like directory for ownership test
file: path=/tmp/worldwritable mode=1777 state=directory
- name: Ask to create a file without enough perms to change ownership
file: path=/tmp/worldwritable/baz.txt state=touch owner=root
become: yes
become_user: nobody
register: chown_result
ignore_errors: True
- name: Ask whether the new file exists
stat: path=/tmp/worldwritable/baz.txt
register: file_exists_result
- name: Verify that the file doesn't exist on failure
assert:
that:
- "chown_result.failed == True"
- "file_exists_result.stat.exists == False"
- name: clean up
file: path=/tmp/worldwritable state=absent
- name: create hard link to file
file: src={{output_file}} dest={{output_dir}}/hard.txt state=hard
register: file6_result
- name: verify that the file was marked as changed
assert:
that:
- "file6_result.changed == true"
- name: touch a hard link
file:
dest: '{{ output_dir }}/hard.txt'
state: 'touch'
register: file6_touch_result
- name: verify that the hard link was touched
assert:
that:
- "file6_touch_result.changed == true"
- name: stat1
stat: path={{output_file}}
register: hlstat1
- name: stat2
stat: path={{output_dir}}/hard.txt
register: hlstat2
- name: verify that hard link is still the same after timestamp updated
assert:
that:
- "hlstat1.stat.inode == hlstat2.stat.inode"
- name: create hard link to file 2
file: src={{output_file}} dest={{output_dir}}/hard.txt state=hard
register: hlink_result
- name: verify that hard link creation is idempotent
assert:
that:
- "hlink_result.changed == False"
- name: Change mode on a hard link
file: src={{output_file}} dest={{output_dir}}/hard.txt mode=0701
register: file6_mode_change
- name: verify that the mode changed on the hard link
  assert:
    that:
      - "file6_mode_change.changed == true"
- name: stat1
stat: path={{output_file}}
register: hlstat1
- name: stat2
stat: path={{output_dir}}/hard.txt
register: hlstat2
- name: verify that hard link is still the same after mode updated
assert:
that:
- "hlstat1.stat.inode == hlstat2.stat.inode"
- "hlstat1.stat.mode == '0701'"
- name: create a directory
file: path={{output_dir}}/foobar state=directory
register: file7_result
- name: verify that the file was marked as changed
assert:
that:
- "file7_result.changed == true"
- "file7_result.state == 'directory'"
- name: determine if selinux is installed
shell: which getenforce || exit 0
register: selinux_installed
- name: determine if selinux is enabled
shell: getenforce
register: selinux_enabled
when: selinux_installed.stdout != ""
ignore_errors: true
- name: remove directory foobar
file: path={{output_dir}}/foobar state=absent
- name: remove file foo.txt
file: path={{output_dir}}/foo.txt state=absent
- name: remove file bar.txt
  file: path={{output_dir}}/bar.txt state=absent
- name: remove file baz.txt
  file: path={{output_dir}}/baz.txt state=absent
- name: copy directory structure over
copy: src=foobar dest={{output_dir}}
- name: check what would be removed if folder state was absent and diff is enabled
file:
path: "{{ item }}"
state: absent
check_mode: yes
diff: yes
with_items:
- "{{ output_dir }}"
- "{{ output_dir }}/foobar/fileA"
register: folder_absent_result
- name: 'assert that the "absent" state lists expected files and folders for only directories'
assert:
that:
- folder_absent_result.results[0].diff.before.path_content is defined
- folder_absent_result.results[1].diff.before.path_content is not defined
- test_folder in folder_absent_result.results[0].diff.before.path_content.directories
- test_file in folder_absent_result.results[0].diff.before.path_content.files
vars:
test_folder: "{{ folder_absent_result.results[0].path }}/foobar"
test_file: "{{ folder_absent_result.results[0].path }}/foobar/fileA"
- name: Change ownership of a directory with recurse=no (default)
file: path={{output_dir}}/foobar owner=1234
- name: verify that the ownership of the directory was set
file: path={{output_dir}}/foobar state=directory
register: file8_result
- name: assert that the directory has changed to have owner 1234
assert:
that:
- "file8_result.uid == 1234"
- name: verify that the ownership of a file under the directory was not set
file: path={{output_dir}}/foobar/fileA state=file
register: file9_result
- name: assert the file owner has not changed to 1234
assert:
that:
- "file9_result.uid != 1234"
- name: change the ownership of a directory with recurse=yes
file: path={{output_dir}}/foobar owner=1235 recurse=yes
- name: verify that the ownership of the directory was set
file: path={{output_dir}}/foobar state=directory
register: file10_result
- name: assert that the directory has changed to have owner 1235
assert:
that:
- "file10_result.uid == 1235"
- name: verify that the ownership of a file under the directory was set
file: path={{output_dir}}/foobar/fileA state=file
register: file11_result
- name: assert that the file has changed to have owner 1235
assert:
that:
- "file11_result.uid == 1235"
- name: remove directory foobar
file: path={{output_dir}}/foobar state=absent
register: file14_result
- name: verify that the directory was removed
assert:
that:
- 'file14_result.changed == true'
- 'file14_result.state == "absent"'
- name: create a test sub-directory
file: dest={{output_dir}}/sub1 state=directory
register: file15_result
- name: verify that the new directory was created
assert:
that:
- 'file15_result.changed == true'
- 'file15_result.state == "directory"'
- name: create test files in the sub-directory
file: dest={{output_dir}}/sub1/{{item}} state=touch
with_items:
- file1
- file2
- file3
register: file16_result
- name: verify the files were created
assert:
that:
- 'item.changed == true'
- 'item.state == "file"'
with_items: "{{file16_result.results}}"
- name: test file creation with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u=rwx,g=rwx,o=rwx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0777'
- name: modify symbolic mode for all
file: dest={{output_dir}}/test_symbolic state=touch mode=a=r
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: modify symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0644'
- name: modify symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0664'
- name: modify symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0666'
- name: modify symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0766'
- name: modify symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0776'
- name: modify symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0777'
- name: remove symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0774'
- name: remove symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0744'
- name: remove symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: set sticky bit with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=o+t
register: result
- name: assert file mode
assert:
that:
- result.mode == '01444'
- name: remove sticky bit with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=o-t
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: add setgid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=g+s
register: result
- name: assert file mode
assert:
that:
- result.mode == '02444'
- name: remove setgid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=g-s
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: add setuid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u+s
register: result
- name: assert file mode
assert:
that:
- result.mode == '04444'
- name: remove setuid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u-s
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
# https://github.com/ansible/ansible/issues/50943
# Need to use /tmp as nobody can't access output_dir at all
- name: create file as root with all write permissions
file: dest=/tmp/write_utime state=touch mode=0666 owner={{ansible_user_id}}
- name: Pause to ensure stat times are not the exact same
pause:
seconds: 1
- block:
- name: get previous time
stat: path=/tmp/write_utime
register: previous_time
- name: pause for 1 second to ensure the next touch is newer
pause: seconds=1
- name: touch file as nobody
file: dest=/tmp/write_utime state=touch
become: True
become_user: nobody
register: result
- name: get new time
stat: path=/tmp/write_utime
register: current_time
always:
- name: remove test utime file
file: path=/tmp/write_utime state=absent
- name: assert touch file as nobody
assert:
that:
- result is changed
- current_time.stat.atime > previous_time.stat.atime
- current_time.stat.mtime > previous_time.stat.mtime
# Follow + recursive tests
- name: create a toplevel directory
file: path={{output_dir}}/test_follow_rec state=directory mode=0755
- name: create a file outside of the toplevel
file: path={{output_dir}}/test_follow_rec_target_file state=touch mode=0700
- name: create a directory outside of the toplevel
file: path={{output_dir}}/test_follow_rec_target_dir state=directory mode=0700
- name: create a file inside of the link target directory
file: path={{output_dir}}/test_follow_rec_target_dir/foo state=touch mode=0700
- name: create a symlink to the file
file: path={{output_dir}}/test_follow_rec/test_link state=link src="../test_follow_rec_target_file"
- name: create a symlink to the directory
file: path={{output_dir}}/test_follow_rec/test_link_dir state=link src="../test_follow_rec_target_dir"
- name: create a symlink to a nonexistent file
file: path={{output_dir}}/test_follow_rec/nonexistent state=link src=does_not_exist force=True
- name: try to change permissions without following symlinks
file: path={{output_dir}}/test_follow_rec follow=False mode="a-x" recurse=True
- name: stat the link file target
stat: path={{output_dir}}/test_follow_rec_target_file
register: file_result
- name: stat the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir
register: dir_result
- name: stat the file inside the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir/foo
register: file_in_dir_result
- name: assert that the link targets were unmodified
assert:
that:
- file_result.stat.mode == '0700'
- dir_result.stat.mode == '0700'
- file_in_dir_result.stat.mode == '0700'
- name: try to change permissions with following symlinks
file: path={{output_dir}}/test_follow_rec follow=True mode="a-x" recurse=True
- name: stat the link file target
stat: path={{output_dir}}/test_follow_rec_target_file
register: file_result
- name: stat the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir
register: dir_result
- name: stat the file inside the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir/foo
register: file_in_dir_result
- name: assert that the link targets were modified
assert:
that:
- file_result.stat.mode == '0600'
- dir_result.stat.mode == '0600'
- file_in_dir_result.stat.mode == '0600'
# https://github.com/ansible/ansible/issues/55971
- name: Test missing src and path
file:
state: hard
register: file_error1
ignore_errors: yes
- assert:
that:
- "file_error1 is failed"
- "file_error1.msg == 'missing required arguments: path'"
- name: Test missing src
file:
dest: "{{ output_dir }}/hard.txt"
state: hard
register: file_error2
ignore_errors: yes
- assert:
that:
- "file_error2 is failed"
- "file_error2.msg == 'src is required for creating new hardlinks'"
- name: Test non-existing src
file:
src: non-existing-file-that-does-not-exist.txt
dest: "{{ output_dir }}/hard.txt"
state: hard
register: file_error3
ignore_errors: yes
- assert:
that:
- "file_error3 is failed"
- "file_error3.msg == 'src does not exist'"
- "file_error3.dest == '{{ output_dir }}/hard.txt' | expanduser"
- "file_error3.src == 'non-existing-file-that-does-not-exist.txt'"
- block:
- name: Create a testing file
file:
dest: original_file.txt
state: touch
- name: Test relative path with state=hard
file:
src: original_file.txt
dest: hard_link_file.txt
state: hard
register: hard_link_relpath
- name: Just check if it was successful, we don't care about the actual hard link in this test
assert:
that:
- "hard_link_relpath is success"
always:
- name: Clean up
file:
path: "{{ item }}"
state: absent
loop:
- original_file.txt
- hard_link_file.txt
# END #55971
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that has a mix of regular tasks, roles and handlers, some nodes may fail because the strategy attempts to load ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py, because included_files are not checked to see whether they are a role before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
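A hedged sketch of the guard the free strategy's handler path presumably needs, mirroring the linked `__init__.py` logic. The names used here (`included_file._is_role`, `self._copy_included_file`) are taken from the referenced code and should be treated as assumptions, not a verified diff:
```python
for included_file in included_files:
    if included_file._is_role:
        # roles ship their own block list; do not hand the role directory to
        # the DataLoader as if it were a YAML tasks file
        new_ir = self._copy_included_file(included_file)
        new_blocks, handler_blocks = new_ir.get_block_list(
            play=iterator._play,
            variable_manager=self._variable_manager,
            loader=self._loader,
        )
    else:
        new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
```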
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
changelogs/fragments/69457-free-strategy-handler-race.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that has a mix of regular tasks, roles and handlers, some nodes may fail because the strategy attempts to load ansible roles as files when handlers are run.
I tracked it down to the _do_handler_run in the lib/ansible/plugins/strategy/__init__.py because included_files are not check to see if they are a role before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error, because the role is loaded correctly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.inventory.host import Host
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
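# e.g. 'discovered_interpreter_python' starts with 'discovered_interpreter_'
# and is therefore always copied back to the delegated host's facts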
class StrategySentinel:
pass
def SharedPluginLoaderObj():
'''This only exists for backwards compat, do not use.
'''
display.deprecated('SharedPluginLoaderObj is deprecated, please directly use ansible.plugins.loader',
version='2.11')
return plugin_loader
_sentinel = StrategySentinel()
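# cleanup() pushes this sentinel onto _final_q so the results thread
# below can break out of its blocking get() and exit cleanly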
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
else:
strategy._results_lock.acquire()
strategy._results.append(result)
strategy._results_lock.release()
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
self.flush_cache = context.CLIARGS.get('flush_cache', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
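# e.g. with forks=10 and 'throttle: 3' on the task, rewind_point
# drops from 10 to 3, so at most 3 workers run this task at once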
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
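# e.g. for a handler named 'restart web' in a plain (non-collection) role
# 'apache', candidates comes out roughly as
# ('restart web', 'apache : restart web', 'apache : restart web')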
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
if state and iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE:
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, iterator)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action == 'include_vars':
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
if original_task.action != 'set_fact' or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if original_task.action == 'set_fact':
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action != 'include_role':?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, iterator):
'''
Helper function to add a new host to inventory based on a task result.
'''
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host.vars = combine_vars(new_host.get_vars(), host_info.get('host_vars', dict()))
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
new_group = self._inventory.groups[group_name]
new_group.add_host(self._inventory.hosts[host_name])
# reconcile inventory, ensures inventory rules are followed
self._inventory.reconcile_inventory()
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
parent_group.add_child_group(group)
if real_host.name not in group.get_hosts():
group.add_host(real_host)
changed = True
if group_name not in host.get_groups():
real_host.add_group(group)
changed = True
if changed:
self._inventory.reconcile_inventory()
return changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
saved_name = handler.name
handler.name = handler_name
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
handler.name = saved_name
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
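# included_files can mix plain task-file includes with include_role
# results; the latter carry a role path in _filename rather than a
# loadable YAML task file (flagged via IncludedFile._is_role in
# recent releases)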
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
if meta_action == 'noop':
# FIXME: issue a callback for the noop here?
if task.when:
self._cond_not_supported_warn(meta_action)
msg = "noop"
elif meta_action == 'flush_handlers':
if task.when:
self._cond_not_supported_warn(meta_action)
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory' or self.flush_cache:
if task.when:
self._cond_not_supported_warn(meta_action)
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if task.when:
self._cond_not_supported_warn(meta_action)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
else:
result['changed'] = False
display.vv("META: %s" % msg)
return [TaskResult(target_host, task, result)]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
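# Typical session: inspect state with 'p task_vars', mutate it with
# plain Python statements, then 'u' to re-template the task and 'r'
# to schedule a re-run (or 'c' to continue, 'q' to quit)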
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, shared_loader_obj=None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as if they were task files when handlers are run.
I tracked it down to `_do_handler_run` in `lib/ansible/plugins/strategy/__init__.py`: entries in `included_files` are not checked to see whether they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error, because the role is loaded correctly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as if they were task files when handlers are run.
I tracked it down to `_do_handler_run` in `lib/ansible/plugins/strategy/__init__.py`: entries in `included_files` are not checked to see whether they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error, because the role is loaded correctly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/inventory
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as if they were task files when handlers are run.
I tracked it down to `_do_handler_run` in `lib/ansible/plugins/strategy/__init__.py`: entries in `included_files` are not checked to see whether they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error, because the role is loaded correctly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/roles/do_handlers/handlers/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py: included_files are not checked to see if they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/roles/do_handlers/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py: included_files are not checked to see if they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/roles/more_sleep/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py: included_files are not checked to see if they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/roles/random_sleep/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py: included_files are not checked to see if they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,457 |
Using the free strategy with a mix of tasks and roles with handlers can lead to failure
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If you use the free strategy with a complex playbook that mixes regular tasks, roles, and handlers, some nodes may fail because the strategy attempts to load Ansible roles as files when handlers are run.
I tracked it down to _do_handler_run in lib/ansible/plugins/strategy/__init__.py: included_files are not checked to see if they are roles before being included.
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/free.py#L240-L259
vs
https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L1005-L1031
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
free.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/centos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
TBD.
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error because it correctly loads the role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [overcloud-novacompute-0]: FAILED! => {
"reason": "Could not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller. Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 869, in _load_included_file\n data = self._loader.load_from_file(included_file._filename)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 94, in load_from_file\n (b_file_data, show_content) = self._get_file_contents(file_name)\n File \"/usr/lib/python3.6/site-packages/ansible/parsing/dataloader.py\", line 162, in _get_file_contents\n raise AnsibleFileNotFound(\"Unable to retrieve file contents\", file_name=file_name)\nansible.errors.AnsibleFileNotFound: Unable to retrieve file contents\nCould not find or access '/home/centos/config-download/overcloud/tripleo_timezone' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option\n"
}
```
|
https://github.com/ansible/ansible/issues/69457
|
https://github.com/ansible/ansible/pull/69498
|
cd8920af998e297a549a4f05cf4a4b3656d7d67e
|
a4072ad0e9c718b6946d599ba05c8a67e26a8195
| 2020-05-12T16:16:27Z |
python
| 2020-05-21T20:55:08Z |
test/integration/targets/handler_race/test_handler_race.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
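The output below suggests the interpreter is resolved from the inventory host's facts rather than the delegated host's. As a hedged sketch only, `resolve_python_interpreter` is a hypothetical helper (not part of Ansible) showing the lookup one would expect, using the real `ansible_delegated_vars` structure:
```python
# Hypothetical helper, not the actual fix: resolve the interpreter from the
# delegated host's variables when a task is delegated.
def resolve_python_interpreter(task_vars, delegated_host=None):
    host_vars = task_vars
    if delegated_host:
        # task_vars['ansible_delegated_vars'] maps delegated host names to
        # that host's own variable namespace
        host_vars = task_vars.get('ansible_delegated_vars', {}).get(
            delegated_host, task_vars)
    return (host_vars.get('ansible_python_interpreter')
            or host_vars.get('ansible_facts', {}).get(
                'discovered_interpreter_python'))
```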
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
changelogs/fragments/discovery_delegation_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import stat
import tempfile
import time
from abc import ABCMeta, abstractmethod
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common._collections_compat import Sequence
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type, iteritems, with_metaclass
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
display = Display()
class ActionBase(with_metaclass(ABCMeta, object)):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([])
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
self._supports_check_mode = True
self._supports_async = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
self._used_interpreter = None
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._play_context.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._play_context.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
def _configure_module(self, module_name, module_args, task_vars=None):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if task_vars is None:
task_vars = dict()
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
win_collection = 'ansible.windows'
# async_status, win_stat, win_file, win_copy, and win_ping are not just like their
# python counterparts but they are compatible enough for our
# internal usage
if module_name in ('stat', 'file', 'copy', 'ping') and self._task.action != module_name:
module_name = '%s.win_%s' % (win_collection, module_name)
elif module_name in ['async_status']:
module_name = '%s.%s' % (win_collection, module_name)
# Remove extra quotes surrounding path parameters before sending to module.
if resource_from_fqcr(module_name) in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \
hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
module_path = self._shared_loader_obj.module_loader.find_plugin(module_name, mod_type, collection_list=self._task.collections)
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=task_vars,
module_compression=self._play_context.module_compression,
async_timeout=self._task.async_val,
environment=final_environment,
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=task_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# store in local task_vars facts collection for the retry and any other usages in this worker
if task_vars.get('ansible_facts') is None:
task_vars['ansible_facts'] = {}
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
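# NOTE: later lookups of this fact (e.g. discovered_interpreter_python)
# must come from the variables of the host the task actually runs on;
# reading it from the inventory host breaks delegation (see issue 69606)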
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters: merge in the parent's values first
# so that values set at the block level, and then at the task level,
# 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
Determines if we are required and can do pipelining
'''
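# pipelining streams the module source over the connection's stdin instead
# of writing it to a remote temporary file first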
# every one of these conditions must be true for pipelining to be used
for condition in [
self._connection.has_pipelining,
self._play_context.pipelining or self._connection.always_pipeline_modules, # pipelining enabled for play or connection requires it (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or self._connection.always_pipeline_modules, # async does not normally support pipelining unless it does (eg winrm)
(self._connection.become.name if self._connection.become else '') != 'su', # su does not work with pipelining,
# FIXME: we might need to make become_method exclusion a configurable list
]:
if not condition:
return False
return True
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have remote_user option, fall back to default and/or play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fall back to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
# NOTE: shell plugins should populate this setting anyway, but they don't do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
become_unprivileged = self._is_become_unprivileged()
basefile = self._connection._shell._generate_temp_dir_name()
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if self._play_context.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Failed to create temporary directory. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if self._play_context.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
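# the mkdtemp command echoes '<basefile>=<created path>' on success; the
# parsing below splits on that marker to recover the directory the remote
# shell actually created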
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
def _remove_tmp_path(self, tmp_path):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working ssh configuration.
# If ssh breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
* When you use this function you likely want to use _fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use _fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
(because the files could contain passwords or other private
information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the chown fails we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg
"""
if remote_user is None:
remote_user = self._get_remote_user()
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely skip until
# we have a need for it, at which point we'll have to do something different.
return remote_paths
if self._is_become_unprivileged():
# Unprivileged user that's different than the ssh user. Let's get
# to work!
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
else:
chmod_mode = 'rX'
# NOTE: this form fails silently on freebsd. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
res = self._remote_set_user_facl(remote_paths, self.get_become_option('become_user'), setfacl_mode)
if res['rc'] != 0:
# File system acls failed; let's try to use chown next
# Set executable bit first as on some systems an
# unprivileged user can use chown
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote temporary files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
res = self._remote_chown(remote_paths, self.get_become_option('become_user'))
if res['rc'] != 0 and remote_user in self._get_admin_users():
# chown failed even if remote_user is administrator/root
raise AnsibleError('Failed to change ownership of the temporary files Ansible needs to create despite connecting as a privileged user. '
'Unprivileged become user would be unable to read the file.')
elif res['rc'] != 0:
if C.ALLOW_WORLD_READABLE_TMPFILES:
# chown and fs acls failed -- do things this insecure
# way only if the user opted in in the config file
display.warning('Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. '
'This may be insecure. For information on securing this, see '
'https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user')
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
else:
raise AnsibleError('Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see '
'https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user'
% (res['rc'], to_native(res['stderr'])))
elif execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set execute bit on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
return remote_paths
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# empty might be matched, 1 should never match, also backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
def _remote_checksum(self, path, all_vars, follow=False):
'''
Produces a remote checksum given a path.
Returns a number 0-5 for specific errors instead of a checksum; these values can never collide with a real checksum
0 = unknown error
1 = file does not exist, this might not be an error
2 = permissions issue
3 = its a directory, not a file
4 = stat module failed, likely due to not finding python
5 = appropriate json module not found
'''
x = "0" # unknown error has occurred
try:
remote_stat = self._execute_remote_stat(path, all_vars, follow=follow)
if remote_stat['exists'] and remote_stat['isdir']:
x = "3" # its a directory not a file
else:
x = remote_stat['checksum'] # if 1, file is missing
except AnsibleError as e:
errormsg = to_text(e)
if errormsg.endswith(u'Permission denied'):
x = "2" # cannot read file
elif errormsg.endswith(u'MODULE FAILURE'):
x = "4" # python not found or module uncaught exception
elif 'json' in errormsg:
x = "5" # json module needed
finally:
return x # pylint: disable=lost-exception
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
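# e.g. '~deploy/releases' splits into '~deploy' (expanded remotely below)
# and 'releases' (re-joined onto the expanded fragment afterwards)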
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._play_context.remote_addr)
return expanded
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._play_context.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
module_args['_ansible_no_log'] = self._play_context.no_log or no_target_syslog
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._play_context.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules need to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
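# Illustrative call (comment only): self._execute_module(module_name='ping',
# module_args={}, task_vars=task_vars) returns the parsed module result dict.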
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
# FIXME: convert async_wrapper.py to not rely on environment variables
# make sure we get the right async_dir variable, backwards compatibility
# means we need to lookup the env value ANSIBLE_ASYNC_DIR first
remove_async_dir = None
if wrap_async or self._task.async_val:
env_async_dir = [e for e in self._task.environment if
"ANSIBLE_ASYNC_DIR" in e]
if len(env_async_dir) > 0:
msg = "Setting the async dir from the environment keyword " \
"ANSIBLE_ASYNC_DIR is deprecated. Set the async_dir " \
"shell option instead"
self._display.deprecated(msg, "2.12")
else:
# ANSIBLE_ASYNC_DIR is not set on the task, we get the value
# from the shell option and temporarily add to the environment
# list for async_wrapper to pick up
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in iteritems(module_args):
args_data += '%s=%s ' % (k, shlex_quote(text_type(v)))
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task - this is so the async_status plugin doesn't
# fire a deprecation warning when it runs after this task
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(module_name='async_wrapper', module_args=dict(),
task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = str(random.randint(0, 999999999999))
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
# TODO: re-implement async_wrapper as a regular module to avoid this special case
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results ... but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
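# Strip any non-JSON noise (e.g. login banners) from stdout before parsing;
# anything filtered out is surfaced as a warning.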
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''))
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
match = re.compile('%s: (?:No such file or directory|not found)' % self._used_interpreter.lstrip('!#'))
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
# https://github.com/ansible/ansible/issues/68054
if executable:
self._connection._shell.executable = executable
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
if (sudoable and self._connection.become and # if sudoable and have become
resource_from_fqcr(self._connection.transport) != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
if executable:
cmd = executable + ' -c ' + shlex_quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'�' ; diff['after'] = u'�' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(module_name='file', module_args=dict(path=destination, _diff_peek=True), task_vars=task_vars, persist_files=True)
if peek_result.get('failed', False):
display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
return diff
if peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(module_name='slurp', module_args=dict(path=destination), task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._play_context.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
def _find_needle(self, dirname, needle):
'''
find a needle in a haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if the needle is missing, this will raise a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
test/integration/targets/delegate_to/inventory_interpreters
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
test/integration/targets/delegate_to/library/detect_interpreter.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
test/integration/targets/delegate_to/runme.sh
|
#!/usr/bin/env bash
set -eux
platform="$(uname)"
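# On BSD/macOS, add the extra loopback aliases (127.0.0.3/4/254) the tests use;
# Linux routes the whole 127.0.0.0/8 range to lo by default.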
function setup() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
ifconfig lo0
existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true)
echo "${existing}"
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 alias "${ip}" up
fi
done
ifconfig lo0
fi
}
function teardown() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 -alias "${ip}"
fi
done
ifconfig lo0
fi
}
setup
trap teardown EXIT
ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \
ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@"
# this test is not doing what it says it does, also relies on var that should not be available
#ansible-playbook test_loop_control.yml -v "$@"
ansible-playbook test_delegate_to_loop_randomness.yml -v "$@"
ansible-playbook delegate_and_nolog.yml -i inventory -v "$@"
ansible-playbook delegate_facts_block.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@"
# ensure we are using correct settings when delegating
ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
test/integration/targets/delegate_to/verify_interpreter.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,606 |
Discovered interpreter path not used on delegated hosts
|
##### SUMMARY
Delegating a task to another host that uses a different Python interpreter always fails. This was reported previously (#61002, #63180), but the fix (#64906) does not seem to have resolved the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
discovered_interpreter_python
##### ANSIBLE VERSION
```
ansible 2.9.9
config file = None
configured module search path = ['/home/dhatch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhatch/.local/lib/python3.8/site-packages/ansible
executable location = /home/dhatch/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Control machine: Ansible 2.9.9 on Fedora 32
Host A: CentOS 8
Host B (delegation target): CentOS 7
##### STEPS TO REPRODUCE
This playbook shows that delegating a task to the CentOS 7 machine does not work when the task host is CentOS 8:
```yaml
- hosts: all
tasks:
- meta: clear_facts
- hosts: all
gather_facts: true
tasks:
- debug:
var: discovered_interpreter_python
- command: 'true'
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
ignore_errors: true
- hosts: centos-8-host
tasks:
- command: 'true'
delegate_to: centos-7-host
delegate_facts: false
ignore_errors: true
```
##### EXPECTED RESULTS
The Python interpreter used on the delegation target should be the one discovered for that host, NOT the host of the task itself.
##### ACTUAL RESULTS
```paste below
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
PLAY [all] *************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-7-host]
ok: [centos-8-host]
TASK [debug] ***********************************************************************************************************************************
ok: [centos-8-host] => {
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
ok: [centos-7-host] => {
"discovered_interpreter_python": "/usr/bin/python"
}
TASK [command] *********************************************************************************************************************************
changed: [centos-7-host]
changed: [centos-8-host]
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY [centos-8-host] *********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [centos-8-host]
TASK [command] *********************************************************************************************************************************
fatal: [centos-8-host -> centos-7-host]: FAILED! => {"changed": false, "module_stderr": "Shared connection to centos-7-host closed.\r\n", "module_stdout": "/bin/sh: /usr/libexec/platform-python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
...ignoring
PLAY RECAP *************************************************************************************************************************************
centos-8-host : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
centos-7-host : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/69606
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-19T21:08:12Z |
python
| 2020-05-22T13:31:34Z |
test/units/plugins/action/test_action.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Florian Apolloner <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from ansible import constants as C
from units.compat import unittest
from units.compat.mock import patch, MagicMock, mock_open
from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import shlex_quote, builtins
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.action import ActionBase
from ansible.template import Templar
from ansible.vars.clean import clean_facts
from units.mock.loader import DictDataLoader
python_module_replacers = br"""
#!/usr/bin/python
#ANSIBLE_VERSION = "<<ANSIBLE_VERSION>>"
#MODULE_COMPLEX_ARGS = "<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"
#SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"
test = u'Toshio \u304f\u3089\u3068\u307f'
from ansible.module_utils.basic import *
"""
powershell_module_replacers = b"""
WINDOWS_ARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
# POWERSHELL_COMMON
"""
def _action_base():
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
mock_connection_loader = MagicMock()
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
play_context = MagicMock()
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
return action_base
class DerivedActionBase(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
# We're not testing the plugin run() method, just the helper
# methods ActionBase defines
return super(DerivedActionBase, self).run(tmp=tmp, task_vars=task_vars)
class TestActionBase(unittest.TestCase):
def test_action_base_run(self):
mock_task = MagicMock()
mock_task.action = "foo"
mock_task.args = dict(a=1, b=2, c=3)
mock_connection = MagicMock()
play_context = PlayContext()
mock_task.async_val = None
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, dict())
mock_task.async_val = 0
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, {})
def test_action_base__configure_module(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.async_val = 0
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# create a mock shared loader object
def mock_find_plugin(name, options, collection_list=None):
if name == 'badmodule':
return None
elif '.ps1' in options:
return '/fake/path/to/%s.ps1' % name
else:
return '/fake/path/to/%s' % name
mock_module_loader = MagicMock()
mock_module_loader.find_plugin.side_effect = mock_find_plugin
mock_shared_obj_loader = MagicMock()
mock_shared_obj_loader.module_loader = mock_module_loader
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=Templar(loader=fake_loader),
shared_loader_obj=mock_shared_obj_loader,
)
# test python module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(python_module_replacers.strip(), encoding='utf-8'))):
with patch.object(os, 'rename'):
mock_task.args = dict(a=1, foo='fö〩')
mock_connection.module_implementation_preferences = ('',)
(style, shebang, data, path) = action_base._configure_module(mock_task.action, mock_task.args,
task_vars=dict(ansible_python_interpreter='/usr/bin/python'))
self.assertEqual(style, "new")
self.assertEqual(shebang, u"#!/usr/bin/python")
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args)
# test powershell module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(powershell_module_replacers.strip(), encoding='utf-8'))):
mock_task.action = 'win_copy'
mock_task.args = dict(b=2)
mock_connection.module_implementation_preferences = ('.ps1',)
(style, shebang, data, path) = action_base._configure_module('stat', mock_task.args)
self.assertEqual(style, "new")
self.assertEqual(shebang, u'#!powershell')
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args)
def test_action_base__compute_environment_string(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.args = dict(a=1)
# create a mock connection, so we don't actually try and connect to things
def env_prefix(**args):
return ' '.join(['%s=%s' % (k, shlex_quote(text_type(v))) for k, v in args.items()])
mock_connection = MagicMock()
mock_connection._shell.env_prefix.side_effect = env_prefix
# we're using a real play context here
play_context = PlayContext()
# and we're using a real templar here too
templar = Templar(loader=fake_loader)
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=templar,
shared_loader_obj=None,
)
# test standard environment setup
mock_task.environment = [dict(FOO='foo'), None]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test where environment is not a list
mock_task.environment = dict(FOO='foo')
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test environment with a variable in it
templar.available_variables = dict(the_var='bar')
mock_task.environment = [dict(FOO='{{the_var}}')]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=bar")
# test with a bad environment set
mock_task.environment = dict(FOO='foo')
mock_task.environment = ['hi there']
self.assertRaises(AnsibleError, action_base._compute_environment_string)
def test_action_base__early_needs_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
self.assertFalse(action_base._early_needs_tmp_path())
action_base.TRANSFERS_FILES = True
self.assertTrue(action_base._early_needs_tmp_path())
def test_action_base__make_tmp_path(self):
# create our fake task
mock_task = MagicMock()
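# Shell options that _make_tmp_path() queries while building the tmpdir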
def get_shell_opt(opt):
ret = None
if opt == 'admin_users':
ret = ['root', 'toor', 'Administrator']
elif opt == 'remote_tmp':
ret = '~/.ansible/tmp'
return ret
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.transport = 'ssh'
mock_connection._shell.mkdtemp.return_value = 'mkdir command'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.get_option = get_shell_opt
mock_connection._shell.HOMES_RE = re.compile(r'(\'|\")?(~|\$HOME)(.*)')
# we're using a real play context here
play_context = PlayContext()
play_context.become = True
play_context.become_user = 'foo'
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='/some/path')
self.assertEqual(action_base._make_tmp_path('root'), '/some/path/')
# empty path fails
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# authentication failure
action_base._low_level_execute_command.return_value = dict(rc=5, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# ssh error
action_base._low_level_execute_command.return_value = dict(rc=255, stdout='', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
play_context.verbosity = 5
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# general error
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='No space left on device')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
def test_action_base__remove_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection._shell.remove.return_value = 'rm some stuff'
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
# these don't really return anything or raise errors, so
# we're pretty much calling these for coverage right now
action_base._remove_tmp_path('/bad/path/dont/remove')
action_base._remove_tmp_path('/good/path/to/ansible-tmp-thing')
@patch('os.unlink')
@patch('os.fdopen')
@patch('tempfile.mkstemp')
def test_action_base__transfer_data(self, mock_mkstemp, mock_fdopen, mock_unlink):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.put_file.return_value = None
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
mock_afd = MagicMock()
mock_afile = MagicMock()
mock_mkstemp.return_value = (mock_afd, mock_afile)
mock_unlink.return_value = None
mock_afo = MagicMock()
mock_afo.write.return_value = None
mock_afo.flush.return_value = None
mock_afo.close.return_value = None
mock_fdopen.return_value = mock_afo
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some data'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some mixed data: fö〩'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='some value')), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='fö〩')), '/path/to/remote/file')
mock_afo.write.side_effect = Exception()
self.assertRaises(AnsibleError, action_base._transfer_data, '/path/to/remote/file', '')
def test_action_base__execute_remote_stat(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._execute_module = MagicMock()
# test normal case
action_base._execute_module.return_value = dict(stat=dict(checksum='1111111111111111111111111111111111', exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertEqual(res['checksum'], '1111111111111111111111111111111111')
# test does not exist
action_base._execute_module.return_value = dict(stat=dict(exists=False))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertFalse(res['exists'])
self.assertEqual(res['checksum'], '1')
# test no checksum in result from _execute_module
action_base._execute_module.return_value = dict(stat=dict(exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertTrue(res['exists'])
self.assertEqual(res['checksum'], '')
# test stat call failed
action_base._execute_module.return_value = dict(failed=True, msg="because I said so")
self.assertRaises(AnsibleError, action_base._execute_remote_stat, path='/path/to/file', all_vars=dict(), follow=False)
def test_action_base__execute_module(self):
# create our fake task
mock_task = MagicMock()
mock_task.action = 'copy'
mock_task.args = dict(a=1, b=2, c=3)
# create a mock connection, so we don't actually try and connect to things
def build_module_command(env_string, shebang, cmd, arg_path=None):
to_run = [env_string, cmd]
if arg_path:
to_run.append(arg_path)
return " ".join(to_run)
def get_option(option):
return {'admin_users': ['root', 'toor']}.get(option)
mock_connection = MagicMock()
mock_connection.build_module_command.side_effect = build_module_command
mock_connection.socket_path = None
mock_connection._shell.get_remote_filename.return_value = 'copy.py'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.tmpdir = '/var/tmp/mytempdir'
mock_connection._shell.get_option = get_option
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
# fake a lot of methods as we test those elsewhere
action_base._configure_module = MagicMock()
action_base._supports_check_mode = MagicMock()
action_base._is_pipelining_enabled = MagicMock()
action_base._make_tmp_path = MagicMock()
action_base._transfer_data = MagicMock()
action_base._compute_environment_string = MagicMock()
action_base._low_level_execute_command = MagicMock()
action_base._fixup_perms2 = MagicMock()
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._compute_environment_string.return_value = ''
action_base._connection.has_pipelining = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
action_base._low_level_execute_command.return_value = dict(stdout='{"rc": 0, "stdout": "ok"}')
self.assertEqual(action_base._execute_module(module_name=None, module_args=None), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
self.assertEqual(
action_base._execute_module(
module_name='foo',
module_args=dict(z=9, y=8, x=7),
task_vars=dict(a=1)
),
dict(
_ansible_parsed=True,
rc=0,
stdout="ok",
stdout_lines=['ok'],
)
)
# test with needing/removing a remote tmp path
action_base._configure_module.return_value = ('old', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._configure_module.return_value = ('non_native_want_json', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
play_context.become = True
play_context.become_user = 'foo'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
# test an invalid shebang return
action_base._configure_module.return_value = ('new', '', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertRaises(AnsibleError, action_base._execute_module)
# test with check mode enabled, once with support for check
# mode and once with support disabled to raise an error
play_context.check_mode = True
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._supports_check_mode = False
self.assertRaises(AnsibleError, action_base._execute_module)
def test_action_base_sudo_only_if_user_differs(self):
fake_loader = MagicMock()
fake_loader.get_basedir.return_value = os.getcwd()
play_context = PlayContext()
action_base = DerivedActionBase(None, None, play_context, fake_loader, None, None)
action_base.get_become_option = MagicMock(return_value='root')
action_base._get_remote_user = MagicMock(return_value='root')
action_base._connection = MagicMock(exec_command=MagicMock(return_value=(0, '', '')))
action_base._connection._shell = shell = MagicMock(append_command=MagicMock(return_value=('JOINED CMD')))
action_base._connection.become = become = MagicMock()
become.build_become_command.return_value = 'foo'
action_base._low_level_execute_command('ECHO', sudoable=True)
become.build_become_command.assert_not_called()
action_base._get_remote_user.return_value = 'apo'
action_base._low_level_execute_command('ECHO', sudoable=True, executable='/bin/csh')
become.build_become_command.assert_called_once_with("ECHO", shell)
become.build_become_command.reset_mock()
with patch.object(C, 'BECOME_ALLOW_SAME_USER', new=True):
action_base._get_remote_user.return_value = 'root'
action_base._low_level_execute_command('ECHO SAME', sudoable=True)
become.build_become_command.assert_called_once_with("ECHO SAME", shell)
def test__remote_expand_user_relative_pathing(self):
action_base = _action_base()
action_base._play_context.remote_addr = 'bar'
action_base._low_level_execute_command = MagicMock(return_value={'stdout': b'../home/user'})
action_base._connection._shell.join_path.return_value = '../home/user/foo'
with self.assertRaises(AnsibleError) as cm:
action_base._remote_expand_user('~/foo')
self.assertEqual(
cm.exception.message,
"'bar' returned an invalid relative home directory path containing '..'"
)
class TestActionBaseCleanReturnedData(unittest.TestCase):
def test(self):
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
connection_loader_paths = ['/tmp/asdfadf', '/usr/lib64/whatever',
'dfadfasf',
'foo.py',
'.*',
# FIXME: a path with parens breaks the regex
# '(.*)',
'/path/to/ansible/lib/ansible/plugins/connection/custom_connection.py',
'/path/to/ansible/lib/ansible/plugins/connection/ssh.py']
def fake_all(path_only=None):
for path in connection_loader_paths:
yield path
mock_connection_loader = MagicMock()
mock_connection_loader.all = fake_all
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
# mock_connection._shell.env_prefix.side_effect = env_prefix
# action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=None,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
data = {'ansible_playbook_python': '/usr/bin/python',
# 'ansible_rsync_path': '/usr/bin/rsync',
'ansible_python_interpreter': '/usr/bin/python',
'ansible_ssh_some_var': 'whatever',
'ansible_ssh_host_key_somehost': 'some key here',
'some_other_var': 'foo bar'}
data = clean_facts(data)
self.assertNotIn('ansible_playbook_python', data)
self.assertNotIn('ansible_python_interpreter', data)
self.assertIn('ansible_ssh_host_key_somehost', data)
self.assertIn('some_other_var', data)
class TestActionBaseParseReturnedData(unittest.TestCase):
def test_fail_no_json(self):
action_base = _action_base()
rc = 0
stdout = 'foo\nbar\n'
err = 'oopsy'
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertFalse(res['_ansible_parsed'])
self.assertTrue(res['failed'])
self.assertEqual(res['module_stderr'], err)
def test_json_empty(self):
action_base = _action_base()
rc = 0
stdout = '{}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
del res['_ansible_parsed'] # we always have _ansible_parsed
self.assertEqual(len(res), 0)
self.assertFalse(res)
def test_json_facts(self):
action_base = _action_base()
rc = 0
stdout = '{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"}}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
def test_json_facts_add_host(self):
action_base = _action_base()
rc = 0
stdout = '''{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"},
"add_host": {"host_vars": {"some_key": ["whatever the add_host object is"]}
}
}\n'''
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
self.assertIn('add_host', res)
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,574 |
Connection plugins broke in devel branch
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The Podman connection plugin started to fail after May 14, 2020.
It seems https://github.com/ansible/ansible/commit/2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3 broke it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.10.0.dev0
config file = /home/sshnaidm/.ansible.cfg
configured module search path = ['/home/sshnaidm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sshnaidm/venvs/ansible-dev/lib/python3.7/site-packages/ansible
executable location = /home/sshnaidm/venvs/ansible-dev/bin/ansible-playbook
python version = 3.7.7 (default, Mar 13 2020, 10:23:39) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
commit hash: 01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
```
##### STEPS TO REPRODUCE
```bash
# install podman
podman run -d --rm --name "podman-container" python:3-alpine sleep 1d
git clone https://github.com/containers/ansible-podman-collections
cd ansible-podman-collections/tests/integration/targets/connection_podman
# install collection
# run twice: with 2.9 (it passes) and with 2.10dev from devel (it fails)
ANSIBLE_DEBUG=1 ansible-playbook -vvvvv ../connection/test_connection.yml -i test_connection.inventory \
  -e target_hosts=podman -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote
```
Testing playbook
```yaml
- hosts: "{{ target_hosts }}"
gather_facts: no
serial: 1
tasks:
### raw with unicode arg and output
- name: raw with unicode arg and output
raw: echo 汉语
register: command
- name: check output of raw with unicode arg and output
assert:
that:
- "'汉语' in command.stdout"
- command is changed # as of 2.2, raw should default to changed: true for consistency w/ shell/command/script modules
### copy local file with unicode filename and content
- name: create local file with unicode filename and content
local_action: lineinfile dest={{ local_tmp }}-汉语/汉语.txt create=true line=汉语
- name: remove remote file with unicode filename and content
action: "{{ action_prefix }}file path={{ remote_tmp }}-汉语/汉语.txt state=absent"
# [skip]
```
Inventory
```
[podman]
podman-container
[podman:vars]
ansible_host=podman-container
ansible_connection=containers.podman.podman
ansible_python_interpreter=/usr/local/bin/python
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It passes on both 2.9 and 2.10.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Paste of passed result on 2.9: https://pastebin.com/cmbfEH1k
Paste of failed result on 2.10: https://pastebin.com/RER8SBm6
Diff between them, just for better visibility: https://linediff.com/?id=5ec170a6687f4bf1358b4567
!component =lib/ansible/executor/task_executor.py
This is the failed log:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [create local file with unicode filename and content] ********************************************************************************************************************************************************
task path: /home/sshnaidm/sources/ansible-podman-collections/tests/integration/targets/connection/test_connection.yml:19
sending task start callback
entering _queue_task() for podman-container/lineinfile
Creating lock for lineinfile
worker is 1 (out of 1 available)
exiting _queue_task() for podman-container/lineinfile
running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content
done queuing things up, now waiting for results queue to drain
waiting for pending results...
in run() - task 54e1addb-4632-6ef6-342a-00000000000a
variable 'ansible_connection' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_search_path' from source: unknown
variable '_ansible_loop_cache' from source: unknown
calling self._execute()
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
no remote address found for delegated host localhost
using its name, so success depends on DNS resolution
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
Loading FilterModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
Loading FilterModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urls' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urlsplit' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
Loading TestModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
Loading TestModule 'files' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
Loading TestModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
variable 'local_tmp' from source: extra vars
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection
Loading Connection 'local' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection/local.py (found_in_cache=True, class_only=False)
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
variable 'ansible_delegated_vars' from source: unknown
Loading ActionModule 'normal' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/normal.py (searched paths: /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/__pycache__:/home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action)
variable 'omit' from source: magic vars
starting attempt loop
running the handler
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'echo ~ && sleep 0'
<podman-container> ESTABLISH LOCAL CONNECTION FOR USER: sshnaidm
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'echo ~ && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=/home/sshnaidm
, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=ansible-tmp-1589733889.9242184-4036062-66574237396791=/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791
, stderr=
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules
ANSIBALLZ: Using lock for lineinfile
ANSIBALLZ: Acquiring lock
ANSIBALLZ: Lock acquired: 139849027519056
ANSIBALLZ: Creating module
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/basic.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/_text.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/six/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/selectors.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/validation.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_utils.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/warnings.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/file.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/process.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/pycompat24.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/parameters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/_selectors2.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/collections.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/_distro.py
ANSIBALLZ: Writing module into payload
ANSIBALLZ: Writing module
ANSIBALLZ: Renaming module
ANSIBALLZ: Done creating module
variable 'ansible_python_interpreter' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_facts' from source: unknown
Using module file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules/lineinfile.py
transferring module to remote /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
<podman-container> PUT /home/sshnaidm/.ansible/tmp/ansible-local-4035979hft6_xtg/tmp48y3pu6p TO /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
done transferring module to remote
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=127, stdout=, stderr=/bin/sh: /usr/local/bin/python: No such file or directory
done with _execute_module (lineinfile, {'dest': '/tmp/ansible-local-汉语/汉语.txt', 'create': 'true', 'line': '汉语', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 5, '_ansible_version': '2.10.0.dev0', '_ansible_module_name': 'lineinfile', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/', '_ansible_remote_tmp': '~/.ansible/tmp'})
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
handler run complete
attempt loop complete, returning result
_execute() done
dumping result to json
done dumping result, returning
done running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content [54e1addb-4632-6ef6-342a-00000000000a]
sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
done sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
WORKER PROCESS EXITING
marking podman-container as failed
marking host podman-container failed, current state: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
^ failed state is now: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
getting the next task for host podman-container
host podman-container is done iterating, returning
fatal: [podman-container]: FAILED! => {
"changed": false,
"rc": 127
}
MSG:
The module failed to execute correctly, you probably need to set the interpreter.
See stdout/stderr for the exact error
MODULE_STDERR:
/bin/sh: /usr/local/bin/python: No such file or directory
```
|
https://github.com/ansible/ansible/issues/69574
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-17T17:15:28Z |
python
| 2020-05-22T13:31:34Z |
changelogs/fragments/discovery_delegation_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,574 |
Connection plugins broke in devel branch
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The Podman connection plugin started to fail after May 14, 2020.
It seems https://github.com/ansible/ansible/commit/2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3 broke it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.10.0.dev0
config file = /home/sshnaidm/.ansible.cfg
configured module search path = ['/home/sshnaidm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sshnaidm/venvs/ansible-dev/lib/python3.7/site-packages/ansible
executable location = /home/sshnaidm/venvs/ansible-dev/bin/ansible-playbook
python version = 3.7.7 (default, Mar 13 2020, 10:23:39) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
commit hash: 01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
```
##### STEPS TO REPRODUCE
```bash
# install podman
podman run -d --rm --name "podman-container" python:3-alpine sleep 1d
git clone https://github.com/containers/ansible-podman-collections
cd ansible-podman-collections/tests/integration/targets/connection_podman
# install collection
# run twice: with 2.9 (it passes) and with 2.10dev from devel (it fails)
ANSIBLE_DEBUG=1 ansible-playbook -vvvvv ../connection/test_connection.yml -i test_connection.inventory \
  -e target_hosts=podman -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote
```
Testing playbook
```yaml
- hosts: "{{ target_hosts }}"
gather_facts: no
serial: 1
tasks:
### raw with unicode arg and output
- name: raw with unicode arg and output
raw: echo 汉语
register: command
- name: check output of raw with unicode arg and output
assert:
that:
- "'汉语' in command.stdout"
- command is changed # as of 2.2, raw should default to changed: true for consistency w/ shell/command/script modules
### copy local file with unicode filename and content
- name: create local file with unicode filename and content
local_action: lineinfile dest={{ local_tmp }}-汉语/汉语.txt create=true line=汉语
- name: remove remote file with unicode filename and content
action: "{{ action_prefix }}file path={{ remote_tmp }}-汉语/汉语.txt state=absent"
# [skip]
```
Inventory
```
[podman]
podman-container
[podman:vars]
ansible_host=podman-container
ansible_connection=containers.podman.podman
ansible_python_interpreter=/usr/local/bin/python
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It passes on both 2.9 and 2.10.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Paste of passed result on 2.9: https://pastebin.com/cmbfEH1k
Paste of failed result on 2.10: https://pastebin.com/RER8SBm6
Diff between them, just for better visibility: https://linediff.com/?id=5ec170a6687f4bf1358b4567
!component =lib/ansible/executor/task_executor.py
This is the failed log:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [create local file with unicode filename and content] ********************************************************************************************************************************************************
task path: /home/sshnaidm/sources/ansible-podman-collections/tests/integration/targets/connection/test_connection.yml:19
sending task start callback
entering _queue_task() for podman-container/lineinfile
Creating lock for lineinfile
worker is 1 (out of 1 available)
exiting _queue_task() for podman-container/lineinfile
running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content
done queuing things up, now waiting for results queue to drain
waiting for pending results...
in run() - task 54e1addb-4632-6ef6-342a-00000000000a
variable 'ansible_connection' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_search_path' from source: unknown
variable '_ansible_loop_cache' from source: unknown
calling self._execute()
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
no remote address found for delegated host localhost
using its name, so success depends on DNS resolution
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
Loading FilterModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
Loading FilterModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urls' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urlsplit' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
Loading TestModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
Loading TestModule 'files' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
Loading TestModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
variable 'local_tmp' from source: extra vars
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection
Loading Connection 'local' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection/local.py (found_in_cache=True, class_only=False)
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
variable 'ansible_delegated_vars' from source: unknown
Loading ActionModule 'normal' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/normal.py (searched paths: /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/__pycache__:/home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action)
variable 'omit' from source: magic vars
starting attempt loop
running the handler
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'echo ~ && sleep 0'
<podman-container> ESTABLISH LOCAL CONNECTION FOR USER: sshnaidm
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'echo ~ && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=/home/sshnaidm
, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=ansible-tmp-1589733889.9242184-4036062-66574237396791=/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791
, stderr=
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules
ANSIBALLZ: Using lock for lineinfile
ANSIBALLZ: Acquiring lock
ANSIBALLZ: Lock acquired: 139849027519056
ANSIBALLZ: Creating module
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/basic.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/_text.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/six/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/selectors.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/validation.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_utils.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/warnings.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/file.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/process.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/pycompat24.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/parameters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/_selectors2.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/collections.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/_distro.py
ANSIBALLZ: Writing module into payload
ANSIBALLZ: Writing module
ANSIBALLZ: Renaming module
ANSIBALLZ: Done creating module
variable 'ansible_python_interpreter' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_facts' from source: unknown
Using module file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules/lineinfile.py
transferring module to remote /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
<podman-container> PUT /home/sshnaidm/.ansible/tmp/ansible-local-4035979hft6_xtg/tmp48y3pu6p TO /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
done transferring module to remote
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=127, stdout=, stderr=/bin/sh: /usr/local/bin/python: No such file or directory
done with _execute_module (lineinfile, {'dest': '/tmp/ansible-local-汉语/汉语.txt', 'create': 'true', 'line': '汉语', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 5, '_ansible_version': '2.10.0.dev0', '_ansible_module_name': 'lineinfile', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/', '_ansible_remote_tmp': '~/.ansible/tmp'})
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
handler run complete
attempt loop complete, returning result
_execute() done
dumping result to json
done dumping result, returning
done running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content [54e1addb-4632-6ef6-342a-00000000000a]
sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
done sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
WORKER PROCESS EXITING
marking podman-container as failed
marking host podman-container failed, current state: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
^ failed state is now: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
getting the next task for host podman-container
host podman-container is done iterating, returning
fatal: [podman-container]: FAILED! => {
"changed": false,
"rc": 127
}
MSG:
The module failed to execute correctly, you probably need to set the interpreter.
See stdout/stderr for the exact error
MODULE_STDERR:
/bin/sh: /usr/local/bin/python: No such file or directory
```
|
https://github.com/ansible/ansible/issues/69574
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-17T17:15:28Z |
python
| 2020-05-22T13:31:34Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import stat
import tempfile
import time
from abc import ABCMeta, abstractmethod
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common._collections_compat import Sequence
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type, iteritems, with_metaclass
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
display = Display()
class ActionBase(with_metaclass(ABCMeta, object)):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([])
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
self._supports_check_mode = True
self._supports_async = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
self._used_interpreter = None
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._play_context.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._play_context.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
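# A derived plugin (illustrative sketch, not part of this file) typically
# extends run() much like the bundled 'normal' action plugin:
#
#   class ActionModule(ActionBase):
#       def run(self, tmp=None, task_vars=None):
#           result = super(ActionModule, self).run(tmp, task_vars)
#           result.update(self._execute_module(task_vars=task_vars))
#           return result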
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
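# Illustrative usage of these helpers (mirroring _make_tmp_path below):
#   tmpdir = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
# returns the plugin's configured value, or the default when the plugin does
# not define the option or does not use the config system.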
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
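# (with the default 'sh' shell plugin the generated command is essentially
# 'test -e <path>', so a non-zero rc means the path does not exist)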
def _configure_module(self, module_name, module_args, task_vars=None):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if task_vars is None:
task_vars = dict()
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
win_collection = 'ansible.windows'
# async_status, win_stat, win_file, win_copy, and win_ping are not just like their
# python counterparts but they are compatible enough for our
# internal usage
if module_name in ('stat', 'file', 'copy', 'ping') and self._task.action != module_name:
module_name = '%s.win_%s' % (win_collection, module_name)
elif module_name in ['async_status']:
module_name = '%s.%s' % (win_collection, module_name)
# Remove extra quotes surrounding path parameters before sending to module.
if resource_from_fqcr(module_name) in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \
hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
module_path = self._shared_loader_obj.module_loader.find_plugin(module_name, mod_type, collection_list=self._task.collections)
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=task_vars,
module_compression=self._play_context.module_compression,
async_timeout=self._task.async_val,
environment=final_environment,
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=task_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# store in local task_vars facts collection for the retry and any other usages in this worker
if task_vars.get('ansible_facts') is None:
task_vars['ansible_facts'] = {}
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
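# With the default 'sh' shell plugin, env_prefix renders the dict as a string
# of KEY=value pairs (e.g. "LC_ALL=C HTTP_PROXY=http://proxy:3128") that is
# prepended to the remote command line.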
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
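# Example: an action plugin that uploads payloads can opt in simply by
# declaring 'TRANSFERS_FILES = True' at class level, which makes run()
# create the remote tmpdir before the plugin body executes.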
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
Determines if we are required and can do pipelining
'''
# any of these require a true
for condition in [
self._connection.has_pipelining,
self._play_context.pipelining or self._connection.always_pipeline_modules, # pipelining enabled for play or connection requires it (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or self._connection.always_pipeline_modules, # async does not normally support pipelining unless it does (eg winrm)
(self._connection.become.name if self._connection.become else '') != 'su', # su does not work with pipelining,
# FIXME: we might need to make become_method exclusion a configurable list
]:
if not condition:
return False
return True
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have remote_user option, fall back to the default and/or play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fallback to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
# NOTE: shell plugins should populate this setting anyway, but they don't do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
become_unprivileged = self._is_become_unprivileged()
basefile = self._connection._shell._generate_temp_dir_name()
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if self._play_context.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Failed to create temporary directory. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if self._play_context.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
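# the mkdtemp command echoes '<basefile>=<created path>'; split on that marker
# and keep the last line of stdout so shell banners or MOTD output are ignored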
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
def _remove_tmp_path(self, tmp_path):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working ssh configuration.
# If ssh breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
* When you use this function you likely want to use fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
(because the files could contain passwords or other private
information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the chown fails we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg
"""
if remote_user is None:
remote_user = self._get_remote_user()
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely skip until
# we have a need for it, at which point we'll have to do something different.
return remote_paths
if self._is_become_unprivileged():
# Unprivileged user that's different than the ssh user. Let's get
# to work!
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
else:
chmod_mode = 'rX'
# NOTE: this form fails silently on freebsd. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
res = self._remote_set_user_facl(remote_paths, self.get_become_option('become_user'), setfacl_mode)
if res['rc'] != 0:
# File system acls failed; let's try to use chown next
# Set executable bit first as on some systems an
# unprivileged user can use chown
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote temporary files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
res = self._remote_chown(remote_paths, self.get_become_option('become_user'))
if res['rc'] != 0 and remote_user in self._get_admin_users():
# chown failed even if remote_user is administrator/root
raise AnsibleError('Failed to change ownership of the temporary files Ansible needs to create despite connecting as a privileged user. '
'Unprivileged become user would be unable to read the file.')
elif res['rc'] != 0:
if C.ALLOW_WORLD_READABLE_TMPFILES:
# chown and fs acls failed -- do things this insecure
# way only if the user opted in via the config file
display.warning('Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. '
'This may be insecure. For information on securing this, see '
'https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user')
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
else:
raise AnsibleError('Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see '
'https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user'
% (res['rc'], to_native(res['stderr'])))
elif execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set execute bit on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
return remote_paths
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# an empty checksum might otherwise match; '1' never matches a real checksum, and this is backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
def _remote_checksum(self, path, all_vars, follow=False):
'''
Produces a remote checksum given a path,
Returns a number 0-5 for specific errors instead of a checksum, and ensures the value differs from any real checksum
0 = unknown error
1 = file does not exist, this might not be an error
2 = permissions issue
3 = it's a directory, not a file
4 = stat module failed, likely due to not finding python
5 = appropriate json module not found
'''
x = "0" # unknown error has occurred
try:
remote_stat = self._execute_remote_stat(path, all_vars, follow=follow)
if remote_stat['exists'] and remote_stat['isdir']:
x = "3" # its a directory not a file
else:
x = remote_stat['checksum'] # if 1, file is missing
except AnsibleError as e:
errormsg = to_text(e)
if errormsg.endswith(u'Permission denied'):
x = "2" # cannot read file
elif errormsg.endswith(u'MODULE FAILURE'):
x = "4" # python not found or module uncaught exception
elif 'json' in errormsg:
x = "5" # json module needed
finally:
return x # pylint: disable=lost-exception
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
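# the expansion command simply echoes the expanded path; take the last line of
# stdout in case the shell prints a banner or other noise before it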
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Fall back to pwd; if that
# also fails, return the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._play_context.remote_addr)
return expanded
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._play_context.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
module_args['_ansible_no_log'] = self._play_context.no_log or no_target_syslog
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._play_context.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules needs to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
# FIXME: convert async_wrapper.py to not rely on environment variables
# make sure we get the right async_dir variable, backwards compatibility
# means we need to lookup the env value ANSIBLE_ASYNC_DIR first
remove_async_dir = None
if wrap_async or self._task.async_val:
env_async_dir = [e for e in self._task.environment if
"ANSIBLE_ASYNC_DIR" in e]
if len(env_async_dir) > 0:
msg = "Setting the async dir from the environment keyword " \
"ANSIBLE_ASYNC_DIR is deprecated. Set the async_dir " \
"shell option instead"
self._display.deprecated(msg, "2.12")
else:
# ANSIBLE_ASYNC_DIR is not set on the task, we get the value
# from the shell option and temporarily add to the environment
# list for async_wrapper to pick up
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
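# remember the index of this temporary entry so it can be removed again once
# the module command has been built (see the cleanup further below)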
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in iteritems(module_args):
args_data += '%s=%s ' % (k, shlex_quote(text_type(v)))
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task - this is so the async_status plugin doesn't
# fire a deprecation warning when it runs after this task
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(module_name='async_wrapper', module_args=dict(),
task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = str(random.randint(0, 999999999999))
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
# TODO: re-implement async_wrapper as a regular module to avoid this special case
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if this still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''))
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
match = re.compile('%s: (?:No such file or directory|not found)' % self._used_interpreter.lstrip('!#'))
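# e.g. this matches stderr such as '/bin/sh: /usr/local/bin/python: No such file or directory'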
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
# https://github.com/ansible/ansible/issues/68054
if executable:
self._connection._shell.executable = executable
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
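# become is applied only when escalation is allowed for this command, a become
# plugin is configured, the transport is not network_cli, and either same-user
# escalation is allowed or the remote and become users differ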
if (sudoable and self._connection.become and # if sudoable and have become
resource_from_fqcr(self._connection.transport) != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
if executable:
cmd = executable + ' -c ' + shlex_quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'�' ; diff['after'] = u'�' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(module_name='file', module_args=dict(path=destination, _diff_peek=True), task_vars=task_vars, persist_files=True)
if peek_result.get('failed', False):
display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
return diff
if peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(module_name='slurp', module_args=dict(path=destination), task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._play_context.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
def _find_needle(self, dirname, needle):
'''
find a needle in haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if missing it will return a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,574 |
Connection plugins broke in devel branch
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The Podman connection plugin started to fail after May 14, 2020.
Seems like https://github.com/ansible/ansible/commit/2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3 broke it
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.10.0.dev0
config file = /home/sshnaidm/.ansible.cfg
configured module search path = ['/home/sshnaidm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sshnaidm/venvs/ansible-dev/lib/python3.7/site-packages/ansible
executable location = /home/sshnaidm/venvs/ansible-dev/bin/ansible-playbook
python version = 3.7.7 (default, Mar 13 2020, 10:23:39) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
commit hash: 01e7915b0a9778a934a0f0e9e9d110dbef7e31ec
```
##### STEPS TO REPRODUCE
```bash
# install podman
podman run -d --rm --name "podman-container" python:3-alpine sleep 1d
git clone https://github.com/containers/ansible-podman-collections
cd ansible-podman-collections/tests/integration/targets/connection_podman
# install collection
# run twice with 2.9 (it passes) and 2.10dev from devel (it fails)
ANSIBLE_DEBUG=1 ansible-playbook -vvvvv ../connection/test_connection.yml -i test_connection.inventory \
-e target_hosts=podman -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote
```
Testing playbook
```yaml
- hosts: "{{ target_hosts }}"
gather_facts: no
serial: 1
tasks:
### raw with unicode arg and output
- name: raw with unicode arg and output
raw: echo 汉语
register: command
- name: check output of raw with unicode arg and output
assert:
that:
- "'汉语' in command.stdout"
- command is changed # as of 2.2, raw should default to changed: true for consistency w/ shell/command/script modules
### copy local file with unicode filename and content
- name: create local file with unicode filename and content
local_action: lineinfile dest={{ local_tmp }}-汉语/汉语.txt create=true line=汉语
- name: remove remote file with unicode filename and content
action: "{{ action_prefix }}file path={{ remote_tmp }}-汉语/汉语.txt state=absent"
# [skip]
```
Inventory
```
[podman]
podman-container
[podman:vars]
ansible_host=podman-container
ansible_connection=containers.podman.podman
ansible_python_interpreter=/usr/local/bin/python
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It passes on both 2.9 and 2.10.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Paste of passed result on 2.9: https://pastebin.com/cmbfEH1k
Paste of failed result on 2.10: https://pastebin.com/RER8SBm6
Diff between them, just for better visibility: https://linediff.com/?id=5ec170a6687f4bf1358b4567
!component =lib/ansible/executor/task_executor.py
This is the failed log:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [create local file with unicode filename and content] ********************************************************************************************************************************************************
task path: /home/sshnaidm/sources/ansible-podman-collections/tests/integration/targets/connection/test_connection.yml:19
sending task start callback
entering _queue_task() for podman-container/lineinfile
Creating lock for lineinfile
worker is 1 (out of 1 available)
exiting _queue_task() for podman-container/lineinfile
running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content
done queuing things up, now waiting for results queue to drain
waiting for pending results...
in run() - task 54e1addb-4632-6ef6-342a-00000000000a
variable 'ansible_connection' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_search_path' from source: unknown
variable '_ansible_loop_cache' from source: unknown
calling self._execute()
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
no remote address found for delegated host localhost
using its name, so success depends on DNS resolution
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
variable 'omit' from source: magic vars
Loading FilterModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/core.py (found_in_cache=True, class_only=False)
Loading FilterModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/mathstuff.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urls' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urls.py (found_in_cache=True, class_only=False)
Loading FilterModule 'urlsplit' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/filter/urlsplit.py (found_in_cache=True, class_only=False)
Loading TestModule 'core' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/core.py (found_in_cache=True, class_only=False)
Loading TestModule 'files' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/files.py (found_in_cache=True, class_only=False)
Loading TestModule 'mathstuff' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/test/mathstuff.py (found_in_cache=True, class_only=False)
variable 'local_tmp' from source: extra vars
variable 'ansible_delegated_vars' from source: unknown
variable 'ansible_connection' from source: host vars for 'localhost'
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection
Loading Connection 'local' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/connection/local.py (found_in_cache=True, class_only=False)
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
Loading ShellModule 'sh' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/shell/sh.py (found_in_cache=True, class_only=False)
variable 'ansible_delegated_vars' from source: unknown
Loading ActionModule 'normal' from /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/normal.py (searched paths: /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action/__pycache__:/home/sshnaidm/.local/lib/python3.7/site-packages/ansible/plugins/action)
variable 'omit' from source: magic vars
starting attempt loop
running the handler
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'echo ~ && sleep 0'
<podman-container> ESTABLISH LOCAL CONNECTION FOR USER: sshnaidm
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'echo ~ && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=/home/sshnaidm
, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=ansible-tmp-1589733889.9242184-4036062-66574237396791=/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791
, stderr=
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules
ANSIBALLZ: Using lock for lineinfile
ANSIBALLZ: Acquiring lock
ANSIBALLZ: Lock acquired: 139849027519056
ANSIBALLZ: Creating module
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/basic.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/_text.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/six/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/selectors.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/validation.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_utils.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/warnings.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/file.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/process.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/pycompat24.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/parameters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/_selectors2.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/collections.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/_distro.py
ANSIBALLZ: Writing module into payload
ANSIBALLZ: Writing module
ANSIBALLZ: Renaming module
ANSIBALLZ: Done creating module
variable 'ansible_python_interpreter' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_facts' from source: unknown
Using module file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules/lineinfile.py
transferring module to remote /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
<podman-container> PUT /home/sshnaidm/.ansible/tmp/ansible-local-4035979hft6_xtg/tmp48y3pu6p TO /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
done transferring module to remote
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=127, stdout=, stderr=/bin/sh: /usr/local/bin/python: No such file or directory
done with _execute_module (lineinfile, {'dest': '/tmp/ansible-local-汉语/汉语.txt', 'create': 'true', 'line': '汉语', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 5, '_ansible_version': '2.10.0.dev0', '_ansible_module_name': 'lineinfile', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/', '_ansible_remote_tmp': '~/.ansible/tmp'})
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
handler run complete
attempt loop complete, returning result
_execute() done
dumping result to json
done dumping result, returning
done running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content [54e1addb-4632-6ef6-342a-00000000000a]
sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
done sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
WORKER PROCESS EXITING
marking podman-container as failed
marking host podman-container failed, current state: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
^ failed state is now: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
getting the next task for host podman-container
host podman-container is done iterating, returning
fatal: [podman-container]: FAILED! => {
"changed": false,
"rc": 127
}
MSG:
The module failed to execute correctly, you probably need to set the interpreter.
See stdout/stderr for the exact error
MODULE_STDERR:
/bin/sh: /usr/local/bin/python: No such file or directory
```
|
https://github.com/ansible/ansible/issues/69574
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-17T17:15:28Z |
python
| 2020-05-22T13:31:34Z |
test/integration/targets/delegate_to/inventory_interpreters
| |
in local.exec_command()
<podman-container> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sshnaidm/.ansible/tmp `"&& mkdir /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 && echo ansible-tmp-1589733889.9242184-4036062-66574237396791="` echo /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791 `" ) && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=ansible-tmp-1589733889.9242184-4036062-66574237396791=/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791
, stderr=
trying /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules
ANSIBALLZ: Using lock for lineinfile
ANSIBALLZ: Acquiring lock
ANSIBALLZ: Lock acquired: 139849027519056
ANSIBALLZ: Creating module
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/basic.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/_text.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/six/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/selectors.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/validation.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_utils.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/warnings.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/file.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/converters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/process.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/sys_info.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/pycompat24.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/parameters.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/compat/_selectors2.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/common/collections.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/__init__.py
Using module_utils file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/module_utils/distro/_distro.py
ANSIBALLZ: Writing module into payload
ANSIBALLZ: Writing module
ANSIBALLZ: Renaming module
ANSIBALLZ: Done creating module
variable 'ansible_python_interpreter' from source: group vars, precedence entry 'groups_inventory'
variable 'ansible_facts' from source: unknown
Using module file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules/lineinfile.py
transferring module to remote /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
<podman-container> PUT /home/sshnaidm/.ansible/tmp/ansible-local-4035979hft6_xtg/tmp48y3pu6p TO /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
done transferring module to remote
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=127, stdout=, stderr=/bin/sh: /usr/local/bin/python: No such file or directory
done with _execute_module (lineinfile, {'dest': '/tmp/ansible-local-汉语/汉语.txt', 'create': 'true', 'line': '汉语', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 5, '_ansible_version': '2.10.0.dev0', '_ansible_module_name': 'lineinfile', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/', '_ansible_remote_tmp': '~/.ansible/tmp'})
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
handler run complete
attempt loop complete, returning result
_execute() done
dumping result to json
done dumping result, returning
done running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content [54e1addb-4632-6ef6-342a-00000000000a]
sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
done sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
WORKER PROCESS EXITING
marking podman-container as failed
marking host podman-container failed, current state: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
^ failed state is now: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
getting the next task for host podman-container
host podman-container is done iterating, returning
fatal: [podman-container]: FAILED! => {
"changed": false,
"rc": 127
}
MSG:
The module failed to execute correctly, you probably need to set the interpreter.
See stdout/stderr for the exact error
MODULE_STDERR:
/bin/sh: /usr/local/bin/python: No such file or directory
```
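The log points at the root cause: the task is delegated to localhost, yet it still picks up `ansible_python_interpreter=/usr/local/bin/python` from the podman group vars, a path that only exists inside the container. A minimal workaround sketch, assuming the goal is simply to unblock the local task (this is not from the original report), pins the interpreter for the delegated task to the Python running ansible-playbook itself:

```yaml
# Hedged sketch: override the inherited interpreter for a task that runs
# on the control node. ansible_playbook_python is the Python that is
# executing ansible-playbook itself.
- name: create local file with unicode filename and content
  local_action: lineinfile dest={{ local_tmp }}-汉语/汉语.txt create=true line=汉语
  vars:
    ansible_python_interpreter: "{{ ansible_playbook_python }}"
```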
| https://github.com/ansible/ansible/issues/69574 | https://github.com/ansible/ansible/pull/69604 | dc63b365011a583b9e9bcd60d1fad6fb10b553c7 | de3f7c7739851852dec8ea99a76c353317270b70 | 2020-05-17T17:15:28Z | python | 2020-05-22T13:31:34Z | test/integration/targets/delegate_to/library/detect_interpreter.py | |
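The file content for this row is empty in the dump. Judging only from the file name and the pull request it belongs to, a plausible sketch of such a test module (a reconstruction, not the verbatim repository file) would simply report which interpreter executed it:

```python
#!/usr/bin/python
from __future__ import absolute_import, division, print_function
__metaclass__ = type

import sys

from ansible.module_utils.basic import AnsibleModule


def main():
    # Takes no arguments; it only reports the interpreter that ran the
    # module, so a playbook can assert which Python was selected after
    # delegation (reconstruction, not the verbatim file).
    module = AnsibleModule(argument_spec={})
    module.exit_json(found=sys.executable)


if __name__ == '__main__':
    main()
```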
closed | ansible/ansible | https://github.com/ansible/ansible | 69,574 | Connection plugins broke in devel branch | (verbatim duplicate of the issue body above)
| https://github.com/ansible/ansible/issues/69574 | https://github.com/ansible/ansible/pull/69604 | dc63b365011a583b9e9bcd60d1fad6fb10b553c7 | de3f7c7739851852dec8ea99a76c353317270b70 | 2020-05-17T17:15:28Z | python | 2020-05-22T13:31:34Z | test/integration/targets/delegate_to/runme.sh |
#!/usr/bin/env bash

set -eux

platform="$(uname)"

# On BSD/macOS, add the loopback aliases the delegation tests expect;
# remember which ones already existed so they are left alone later.
function setup() {
    if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
        ifconfig lo0

        existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true)

        echo "${existing}"

        for i in 3 4 254; do
            ip="127.0.0.${i}"

            if [[ "${existing}" != *"${ip}"* ]]; then
                ifconfig lo0 alias "${ip}" up
            fi
        done

        ifconfig lo0
    fi
}

# Remove only the aliases that setup() added, leaving pre-existing ones intact.
function teardown() {
    if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
        for i in 3 4 254; do
            ip="127.0.0.${i}"

            if [[ "${existing}" != *"${ip}"* ]]; then
                ifconfig lo0 -alias "${ip}"
            fi
        done

        ifconfig lo0
    fi
}

setup

trap teardown EXIT

ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \
    ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@"

# this test is not doing what it says it does, also relies on var that should not be available
#ansible-playbook test_loop_control.yml -v "$@"

ansible-playbook test_delegate_to_loop_randomness.yml -v "$@"
ansible-playbook delegate_and_nolog.yml -i inventory -v "$@"
ansible-playbook delegate_facts_block.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@"

# ensure we are using correct settings when delegating
ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@"
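Integration targets like this are normally driven by ansible-test from a source checkout, which runs the target's runme.sh; the exact invocation below is an assumption:

```bash
# Assumed invocation from an ansible source checkout; ansible-test
# discovers targets under test/integration/targets/ by directory name.
ansible-test integration delegate_to -v
```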
|
closed | ansible/ansible | https://github.com/ansible/ansible | 69,574 | Connection plugins broke in devel branch | (verbatim duplicate of the issue body above)
| https://github.com/ansible/ansible/issues/69574 | https://github.com/ansible/ansible/pull/69604 | dc63b365011a583b9e9bcd60d1fad6fb10b553c7 | de3f7c7739851852dec8ea99a76c353317270b70 | 2020-05-17T17:15:28Z | python | 2020-05-22T13:31:34Z | test/integration/targets/delegate_to/verify_interpreter.yml | |
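This row's file content is also empty in the dump. Under the assumption that it pairs with the detect_interpreter module sketched above, a plausible verification playbook (a reconstruction, not the verbatim file) could be:

```yaml
# Hedged sketch: run the detection module on each host and check that the
# interpreter Ansible actually used matches the one configured in inventory.
- hosts: all
  gather_facts: no
  tasks:
    - name: detect the interpreter actually used on the target
      detect_interpreter:
      register: result

    - name: assert the configured interpreter was honoured
      assert:
        that:
          - result.found == ansible_python_interpreter
```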
closed | ansible/ansible | https://github.com/ansible/ansible | 69,574 | Connection plugins broke in devel branch | (verbatim duplicate of the issue body above)
variable 'ansible_facts' from source: unknown
Using module file /home/sshnaidm/.local/lib/python3.7/site-packages/ansible/modules/lineinfile.py
transferring module to remote /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
<podman-container> PUT /home/sshnaidm/.ansible/tmp/ansible-local-4035979hft6_xtg/tmp48y3pu6p TO /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py
done transferring module to remote
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'chmod u+x /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c '/usr/local/bin/python /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/AnsiballZ_lineinfile.py && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=127, stdout=, stderr=/bin/sh: /usr/local/bin/python: No such file or directory
done with _execute_module (lineinfile, {'dest': '/tmp/ansible-local-汉语/汉语.txt', 'create': 'true', 'line': '汉语', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 5, '_ansible_version': '2.10.0.dev0', '_ansible_module_name': 'lineinfile', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/', '_ansible_remote_tmp': '~/.ansible/tmp'})
_low_level_execute_command(): starting
_low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
in local.exec_command()
<podman-container> EXEC /bin/sh -c 'rm -f -r /home/sshnaidm/.ansible/tmp/ansible-tmp-1589733889.9242184-4036062-66574237396791/ > /dev/null 2>&1 && sleep 0'
opening command with Popen()
done running command with Popen()
getting output with communicate()
done communicating
done with local.exec_command()
_low_level_execute_command() done: rc=0, stdout=, stderr=
handler run complete
attempt loop complete, returning result
_execute() done
dumping result to json
done dumping result, returning
done running TaskExecutor() for podman-container/TASK: create local file with unicode filename and content [54e1addb-4632-6ef6-342a-00000000000a]
sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
done sending task result for task 54e1addb-4632-6ef6-342a-00000000000a
WORKER PROCESS EXITING
marking podman-container as failed
marking host podman-container failed, current state: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_TASKS, fail_state=FAILED_NONE, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
^ failed state is now: HOST STATE: block=2, task=3, rescue=0, always=0, run_state=ITERATING_COMPLETE, fail_state=FAILED_TASKS, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
getting the next task for host podman-container
host podman-container is done iterating, returning
fatal: [podman-container]: FAILED! => {
"changed": false,
"rc": 127
}
MSG:
The module failed to execute correctly, you probably need to set the interpreter.
See stdout/stderr for the exact error
MODULE_STDERR:
/bin/sh: /usr/local/bin/python: No such file or directory
```
|
https://github.com/ansible/ansible/issues/69574
|
https://github.com/ansible/ansible/pull/69604
|
dc63b365011a583b9e9bcd60d1fad6fb10b553c7
|
de3f7c7739851852dec8ea99a76c353317270b70
| 2020-05-17T17:15:28Z |
python
| 2020-05-22T13:31:34Z |
test/units/plugins/action/test_action.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Florian Apolloner <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from ansible import constants as C
from units.compat import unittest
from units.compat.mock import patch, MagicMock, mock_open
from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import shlex_quote, builtins
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.action import ActionBase
from ansible.template import Templar
from ansible.vars.clean import clean_facts
from units.mock.loader import DictDataLoader
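# Byte-string module payloads fed to the _configure_module() tests below; the
# <<...>> markers stand in for the replacer tokens that module_common expands.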
python_module_replacers = br"""
#!/usr/bin/python
#ANSIBLE_VERSION = "<<ANSIBLE_VERSION>>"
#MODULE_COMPLEX_ARGS = "<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"
#SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"
test = u'Toshio \u304f\u3089\u3068\u307f'
from ansible.module_utils.basic import *
"""
powershell_module_replacers = b"""
WINDOWS_ARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
# POWERSHELL_COMMON
"""
def _action_base():
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
mock_connection_loader = MagicMock()
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
play_context = MagicMock()
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
return action_base
class DerivedActionBase(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
# We're not testing the plugin run() method, just the helper
# methods ActionBase defines
return super(DerivedActionBase, self).run(tmp=tmp, task_vars=task_vars)
class TestActionBase(unittest.TestCase):
def test_action_base_run(self):
mock_task = MagicMock()
mock_task.action = "foo"
mock_task.args = dict(a=1, b=2, c=3)
mock_connection = MagicMock()
play_context = PlayContext()
mock_task.async_val = None
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, dict())
mock_task.async_val = 0
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, {})
def test_action_base__configure_module(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.async_val = 0
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# create a mock shared loader object
def mock_find_plugin(name, options, collection_list=None):
if name == 'badmodule':
return None
elif '.ps1' in options:
return '/fake/path/to/%s.ps1' % name
else:
return '/fake/path/to/%s' % name
mock_module_loader = MagicMock()
mock_module_loader.find_plugin.side_effect = mock_find_plugin
mock_shared_obj_loader = MagicMock()
mock_shared_obj_loader.module_loader = mock_module_loader
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=Templar(loader=fake_loader),
shared_loader_obj=mock_shared_obj_loader,
)
# test python module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(python_module_replacers.strip(), encoding='utf-8'))):
with patch.object(os, 'rename'):
mock_task.args = dict(a=1, foo='fö〩')
mock_connection.module_implementation_preferences = ('',)
(style, shebang, data, path) = action_base._configure_module(mock_task.action, mock_task.args,
task_vars=dict(ansible_python_interpreter='/usr/bin/python'))
self.assertEqual(style, "new")
self.assertEqual(shebang, u"#!/usr/bin/python")
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args)
# test powershell module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(powershell_module_replacers.strip(), encoding='utf-8'))):
mock_task.action = 'win_copy'
mock_task.args = dict(b=2)
mock_connection.module_implementation_preferences = ('.ps1',)
(style, shebang, data, path) = action_base._configure_module('stat', mock_task.args)
self.assertEqual(style, "new")
self.assertEqual(shebang, u'#!powershell')
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args)
def test_action_base__compute_environment_string(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.args = dict(a=1)
# create a mock connection, so we don't actually try and connect to things
def env_prefix(**args):
return ' '.join(['%s=%s' % (k, shlex_quote(text_type(v))) for k, v in args.items()])
mock_connection = MagicMock()
mock_connection._shell.env_prefix.side_effect = env_prefix
# we're using a real play context here
play_context = PlayContext()
# and we're using a real templar here too
templar = Templar(loader=fake_loader)
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=templar,
shared_loader_obj=None,
)
# test standard environment setup
mock_task.environment = [dict(FOO='foo'), None]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test where environment is not a list
mock_task.environment = dict(FOO='foo')
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test environment with a variable in it
templar.available_variables = dict(the_var='bar')
mock_task.environment = [dict(FOO='{{the_var}}')]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=bar")
# test with a bad environment set
mock_task.environment = dict(FOO='foo')
mock_task.environment = ['hi there']
self.assertRaises(AnsibleError, action_base._compute_environment_string)
def test_action_base__early_needs_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
self.assertFalse(action_base._early_needs_tmp_path())
action_base.TRANSFERS_FILES = True
self.assertTrue(action_base._early_needs_tmp_path())
def test_action_base__make_tmp_path(self):
# create our fake task
mock_task = MagicMock()
def get_shell_opt(opt):
ret = None
if opt == 'admin_users':
ret = ['root', 'toor', 'Administrator']
elif opt == 'remote_tmp':
ret = '~/.ansible/tmp'
return ret
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.transport = 'ssh'
mock_connection._shell.mkdtemp.return_value = 'mkdir command'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.get_option = get_shell_opt
mock_connection._shell.HOMES_RE = re.compile(r'(\'|\")?(~|\$HOME)(.*)')
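        # used by the code under test to spot remote_tmp values still anchored
        # at an unexpanded home directory ('~' or '$HOME')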
# we're using a real play context here
play_context = PlayContext()
play_context.become = True
play_context.become_user = 'foo'
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='/some/path')
self.assertEqual(action_base._make_tmp_path('root'), '/some/path/')
# empty path fails
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# authentication failure
action_base._low_level_execute_command.return_value = dict(rc=5, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# ssh error
action_base._low_level_execute_command.return_value = dict(rc=255, stdout='', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
play_context.verbosity = 5
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# general error
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='No space left on device')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
def test_action_base__remove_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection._shell.remove.return_value = 'rm some stuff'
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
# these don't really return anything or raise errors, so
# we're pretty much calling these for coverage right now
action_base._remove_tmp_path('/bad/path/dont/remove')
action_base._remove_tmp_path('/good/path/to/ansible-tmp-thing')
@patch('os.unlink')
@patch('os.fdopen')
@patch('tempfile.mkstemp')
def test_action_base__transfer_data(self, mock_mkstemp, mock_fdopen, mock_unlink):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.put_file.return_value = None
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
mock_afd = MagicMock()
mock_afile = MagicMock()
mock_mkstemp.return_value = (mock_afd, mock_afile)
mock_unlink.return_value = None
mock_afo = MagicMock()
mock_afo.write.return_value = None
mock_afo.flush.return_value = None
mock_afo.close.return_value = None
mock_fdopen.return_value = mock_afo
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some data'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some mixed data: fö〩'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='some value')), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='fö〩')), '/path/to/remote/file')
mock_afo.write.side_effect = Exception()
self.assertRaises(AnsibleError, action_base._transfer_data, '/path/to/remote/file', '')
def test_action_base__execute_remote_stat(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._execute_module = MagicMock()
# test normal case
action_base._execute_module.return_value = dict(stat=dict(checksum='1111111111111111111111111111111111', exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertEqual(res['checksum'], '1111111111111111111111111111111111')
# test does not exist
action_base._execute_module.return_value = dict(stat=dict(exists=False))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertFalse(res['exists'])
self.assertEqual(res['checksum'], '1')
# test no checksum in result from _execute_module
action_base._execute_module.return_value = dict(stat=dict(exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertTrue(res['exists'])
self.assertEqual(res['checksum'], '')
# test stat call failed
action_base._execute_module.return_value = dict(failed=True, msg="because I said so")
self.assertRaises(AnsibleError, action_base._execute_remote_stat, path='/path/to/file', all_vars=dict(), follow=False)
def test_action_base__execute_module(self):
# create our fake task
mock_task = MagicMock()
mock_task.action = 'copy'
mock_task.args = dict(a=1, b=2, c=3)
# create a mock connection, so we don't actually try and connect to things
def build_module_command(env_string, shebang, cmd, arg_path=None):
to_run = [env_string, cmd]
if arg_path:
to_run.append(arg_path)
return " ".join(to_run)
def get_option(option):
return {'admin_users': ['root', 'toor']}.get(option)
mock_connection = MagicMock()
mock_connection.build_module_command.side_effect = build_module_command
mock_connection.socket_path = None
mock_connection._shell.get_remote_filename.return_value = 'copy.py'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.tmpdir = '/var/tmp/mytempdir'
mock_connection._shell.get_option = get_option
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
# fake a lot of methods as we test those elsewhere
action_base._configure_module = MagicMock()
action_base._supports_check_mode = MagicMock()
action_base._is_pipelining_enabled = MagicMock()
action_base._make_tmp_path = MagicMock()
action_base._transfer_data = MagicMock()
action_base._compute_environment_string = MagicMock()
action_base._low_level_execute_command = MagicMock()
action_base._fixup_perms2 = MagicMock()
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._compute_environment_string.return_value = ''
action_base._connection.has_pipelining = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
action_base._low_level_execute_command.return_value = dict(stdout='{"rc": 0, "stdout": "ok"}')
self.assertEqual(action_base._execute_module(module_name=None, module_args=None), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
self.assertEqual(
action_base._execute_module(
module_name='foo',
module_args=dict(z=9, y=8, x=7),
task_vars=dict(a=1)
),
dict(
_ansible_parsed=True,
rc=0,
stdout="ok",
stdout_lines=['ok'],
)
)
# test with needing/removing a remote tmp path
action_base._configure_module.return_value = ('old', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._configure_module.return_value = ('non_native_want_json', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
play_context.become = True
play_context.become_user = 'foo'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
# test an invalid shebang return
action_base._configure_module.return_value = ('new', '', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertRaises(AnsibleError, action_base._execute_module)
# test with check mode enabled, once with support for check
# mode and once with support disabled to raise an error
play_context.check_mode = True
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._supports_check_mode = False
self.assertRaises(AnsibleError, action_base._execute_module)
def test_action_base_sudo_only_if_user_differs(self):
fake_loader = MagicMock()
fake_loader.get_basedir.return_value = os.getcwd()
play_context = PlayContext()
action_base = DerivedActionBase(None, None, play_context, fake_loader, None, None)
action_base.get_become_option = MagicMock(return_value='root')
action_base._get_remote_user = MagicMock(return_value='root')
action_base._connection = MagicMock(exec_command=MagicMock(return_value=(0, '', '')))
action_base._connection._shell = shell = MagicMock(append_command=MagicMock(return_value=('JOINED CMD')))
action_base._connection.become = become = MagicMock()
become.build_become_command.return_value = 'foo'
action_base._low_level_execute_command('ECHO', sudoable=True)
become.build_become_command.assert_not_called()
action_base._get_remote_user.return_value = 'apo'
action_base._low_level_execute_command('ECHO', sudoable=True, executable='/bin/csh')
become.build_become_command.assert_called_once_with("ECHO", shell)
become.build_become_command.reset_mock()
with patch.object(C, 'BECOME_ALLOW_SAME_USER', new=True):
action_base._get_remote_user.return_value = 'root'
action_base._low_level_execute_command('ECHO SAME', sudoable=True)
become.build_become_command.assert_called_once_with("ECHO SAME", shell)
def test__remote_expand_user_relative_pathing(self):
action_base = _action_base()
action_base._play_context.remote_addr = 'bar'
action_base._low_level_execute_command = MagicMock(return_value={'stdout': b'../home/user'})
action_base._connection._shell.join_path.return_value = '../home/user/foo'
with self.assertRaises(AnsibleError) as cm:
action_base._remote_expand_user('~/foo')
self.assertEqual(
cm.exception.message,
"'bar' returned an invalid relative home directory path containing '..'"
)
class TestActionBaseCleanReturnedData(unittest.TestCase):
def test(self):
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
connection_loader_paths = ['/tmp/asdfadf', '/usr/lib64/whatever',
'dfadfasf',
'foo.py',
'.*',
                                   # FIXME: a path with parens breaks the regex
# '(.*)',
'/path/to/ansible/lib/ansible/plugins/connection/custom_connection.py',
'/path/to/ansible/lib/ansible/plugins/connection/ssh.py']
def fake_all(path_only=None):
for path in connection_loader_paths:
yield path
mock_connection_loader = MagicMock()
mock_connection_loader.all = fake_all
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
# mock_connection._shell.env_prefix.side_effect = env_prefix
# action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=None,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
data = {'ansible_playbook_python': '/usr/bin/python',
# 'ansible_rsync_path': '/usr/bin/rsync',
'ansible_python_interpreter': '/usr/bin/python',
'ansible_ssh_some_var': 'whatever',
'ansible_ssh_host_key_somehost': 'some key here',
'some_other_var': 'foo bar'}
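        # clean_facts() should strip controller-internal values (the playbook
        # python and interpreter) while keeping host keys and ordinary vars,
        # as asserted below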
data = clean_facts(data)
self.assertNotIn('ansible_playbook_python', data)
self.assertNotIn('ansible_python_interpreter', data)
self.assertIn('ansible_ssh_host_key_somehost', data)
self.assertIn('some_other_var', data)
class TestActionBaseParseReturnedData(unittest.TestCase):
def test_fail_no_json(self):
action_base = _action_base()
rc = 0
stdout = 'foo\nbar\n'
err = 'oopsy'
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertFalse(res['_ansible_parsed'])
self.assertTrue(res['failed'])
self.assertEqual(res['module_stderr'], err)
def test_json_empty(self):
action_base = _action_base()
rc = 0
stdout = '{}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
del res['_ansible_parsed'] # we always have _ansible_parsed
self.assertEqual(len(res), 0)
self.assertFalse(res)
def test_json_facts(self):
action_base = _action_base()
rc = 0
stdout = '{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"}}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
def test_json_facts_add_host(self):
action_base = _action_base()
rc = 0
stdout = '''{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"},
"add_host": {"host_vars": {"some_key": ["whatever the add_host object is"]}
}
}\n'''
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
self.assertIn('add_host', res)
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,634 |
-K option no longer works for become_password
|
##### SUMMARY
After 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3, `-K` no longer works for become password.
#69629 has a fix to send it to the plugin and get things working again, but @bcoca suggested we should try to do something other than threading it through TQM -> play_context.
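A minimal sketch of that alternative (hypothetical -- `cli_supplied_become_pass` and the call site are assumptions, not the actual change in #69629): hand the CLI-collected password to the become plugin as a plugin option instead of threading it through TQM -> play_context.
```python
# Hypothetical sketch only; it relies on the existing AnsiblePlugin.set_options()
# API and the standard 'become_pass' option that become plugins declare.
from ansible.plugins.loader import become_loader

becomer = become_loader.get(task.become_method)  # e.g. 'sudo'
becomer.set_options(direct={'become_pass': cli_supplied_become_pass})
```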
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become, task_executor
##### ANSIBLE VERSION
devel after 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3
|
https://github.com/ansible/ansible/issues/69634
|
https://github.com/ansible/ansible/pull/69629
|
de3f7c7739851852dec8ea99a76c353317270b70
|
fe9696be525d4ef3177decda6919206492977582
| 2020-05-20T21:30:38Z |
python
| 2020-05-22T13:34:26Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionLoader
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
__all__ = ['TaskExecutor']
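# TaskTimeoutError derives from BaseException rather than Exception so that
# broad `except Exception` clauses elsewhere in the executor cannot swallow
# the timeout raised from the SIGALRM handler below.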
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
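    # illustrative: remove_omit({'a': OMIT, 'b': {'c': OMIT, 'd': 1}}, OMIT)
    # returns {'b': {'d': 1}} -- keys whose value equals the omit token are
    # dropped at any nesting depth, including dicts inside lists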
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
# Modules that we optimize by squashing loop items into a single call to
# the module
SQUASH_ACTIONS = frozenset(C.DEFAULT_SQUASH_ACTIONS)
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results, and set the global changed/failed result flags based on any item.
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
                    if not res.get('failed', False):
res['msg'] = 'All items completed'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# save the play context variables to a temporary dictionary,
# so that we can modify the job vars without doing a full copy
# and later restore them to avoid modifying things too early
play_context_vars = dict()
self._play_context.update_vars(play_context_vars)
old_vars = dict()
for k in play_context_vars:
if k in self._job_vars:
old_vars[k] = self._job_vars[k]
self._job_vars[k] = play_context_vars[k]
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
# now we restore any old job variables that may have been modified,
# and delete them if they were in the play context vars but not in
# the old variables dictionary
for k in play_context_vars:
if k in old_vars:
self._job_vars[k] = old_vars[k]
else:
del self._job_vars[k]
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
            # This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
if self._task.loop_with:
            # Only squash with 'with_:' not with the 'loop:'; 'magic' squashing can be removed once with_ loops are removed
items = self._squash_items(items, loop_var, task_vars)
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
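                # e.g. for items=['a', 'b'] on the first pass: index=1, index0=0,
                # first=True, last=False, length=2, revindex=2, revindex0=1;
                # nextitem/previtem are filled in just below when they exist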
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.put(
TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
),
block=False,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _squash_items(self, items, loop_var, variables):
'''
Squash items down to a comma-separated list for certain modules which support it
(typically package management modules).
'''
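        # e.g. a `yum: name={{ item }}` task with `with_items: [httpd, mariadb]`
        # collapses into a single module invocation with name=['httpd', 'mariadb']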
name = None
try:
# _task.action could contain templatable strings (via action: and
# local_action:) Template it before comparing. If we don't end up
# optimizing it here, the templatable string might use template vars
# that aren't available until later (it could even use vars from the
# with_items loop) so don't make the templated string permanent yet.
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
task_action = self._task.action
if templar.is_template(task_action):
task_action = templar.template(task_action, fail_on_undefined=False)
if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
if all(isinstance(o, string_types) for o in items):
final_items = []
found = None
for allowed in ['name', 'pkg', 'package']:
name = self._task.args.pop(allowed, None)
if name is not None:
found = allowed
break
# This gets the information to check whether the name field
# contains a template that we can squash for
template_no_item = template_with_item = None
if name:
if templar.is_template(name):
variables[loop_var] = '\0$'
template_no_item = templar.template(name, variables, cache=False)
variables[loop_var] = '\0@'
template_with_item = templar.template(name, variables, cache=False)
del variables[loop_var]
# Check if the user is doing some operation that doesn't take
# name/pkg or the name/pkg field doesn't have any variables
# and thus the items can't be squashed
if template_no_item != template_with_item:
if self._task.loop_with and self._task.loop_with not in ('items', 'list'):
value_text = "\"{{ query('%s', %r) }}\"" % (self._task.loop_with, self._task.loop)
else:
value_text = '%r' % self._task.loop
# Without knowing the data structure well, it's easiest to strip python2 unicode
# literals after stringifying
value_text = re.sub(r"\bu'", "'", value_text)
display.deprecated(
'Invoking "%s" only once while using a loop via squash_actions is deprecated. '
'Instead of using a loop to supply multiple items and specifying `%s: "%s"`, '
'please use `%s: %s` and remove the loop' % (self._task.action, found, name, found, value_text),
version='2.11'
)
for item in items:
variables[loop_var] = item
if self._task.evaluate_conditional(templar, variables):
new_item = templar.template(name, cache=False)
final_items.append(new_item)
self._task.args['name'] = final_items
# Wrap this in a list so that the calling function loop
# executes exactly once
return [final_items]
else:
# Restore the name parameter
self._task.args['name'] = name
# elif:
# Right now we only optimize single entries. In the future we
# could optimize more types:
# * lists can be squashed together
# * dicts could squash entries that match in all cases except the
# name or pkg field.
except Exception:
# Squashing is an optimization. If it fails for any reason,
# simply use the unoptimized list of items.
# Restore the name parameter
if name is not None:
self._task.args['name'] = name
return items
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
context_validation_error = None
try:
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
# FIXME: update connection/shell plugin options
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError:
# loop error takes precedence
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in ('include', 'include_tasks'):
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action == 'include_role':
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
self._task.post_validate(templar=templar)
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(variables=variables, templar=templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
delegated_vars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
templar.available_variables = delegated_vars
plugin_vars = self._set_connection_options(delegated_vars, templar)
templar.available_variables = orig_vars
else:
# just use normal host vars
plugin_vars = self._set_connection_options(variables, templar)
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(self._task.action, self._task.args, self._task.module_defaults, templar)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
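        # note: a user-supplied `retries: 3` yields up to 4 attempts (incremented
        # above), while the default for `until` is 3 attempts via the
        # xrange(1, retries + 1) loop below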
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
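                # arm a SIGALRM that raises TaskTimeoutError once `timeout`
                # seconds elapse; the finally block below disarms the alarm and
                # restores the previous handler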
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# FIXME callback 'v2_runner_on_async_poll' here
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.put(TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs()), block=False)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, e.g. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = delegated_vars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case; 'normal' takes care of everything, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, variables, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
if self._task.delegate_to is not None:
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
else:
cvars = variables
# use magic var if it exists; if not, let task inheritance do its thing.
self._play_context.connection = cvars.get('ansible_connection', self._task.connection)
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko); eventually this should move to the task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection = self._shared_loader_obj.connection_loader.get(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become', self._task.become):
become_plugin = self._get_become(cvars.get('ansible_become_method', self._task.become_method))
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, variables, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, variables, templar):
final_vars = combine_vars(variables, variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()))
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
task_keys['become_pass'] = self._connection.become.get_option('become_pass')
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
else:
# FUTURE: once we're comfortable with collections impl, preface this action with ansible.builtin so it can't be hijacked
handler_name = 'normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATHS': os.pathsep.join(AnsibleCollectionLoader().n_collection_paths),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
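# Summary: start_connection forks 'ansible-connection', hands it the variables
# and the serialized play context over a pty, then parses the JSON reply for
# either 'socket_path' or 'error'.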
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,634 |
-K option no longer works for become_password
|
##### SUMMARY
After 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3, `-K` no longer works for become password.
#69629 has a fix to send it to the plugin and get things working again, but @bcoca suggested we should try to do something other than threading it through TQM -> play_context.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become, task_executor
##### ANSIBLE VERSION
devel after 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3
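For reference, the linked test target relies on pexpect; a minimal sketch of driving the prompt interactively (the prompt text and command line here are assumptions, not the exact test):
```python
import pexpect

# Spawn an ad-hoc command that should trigger the become-password prompt.
child = pexpect.spawn('ansible localhost -m ping -K', timeout=30)
child.expect('BECOME password')  # assumed prompt text
child.sendline('some-password')
child.expect(pexpect.EOF)
child.close()
```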
|
https://github.com/ansible/ansible/issues/69634
|
https://github.com/ansible/ansible/pull/69629
|
de3f7c7739851852dec8ea99a76c353317270b70
|
fe9696be525d4ef3177decda6919206492977582
| 2020-05-20T21:30:38Z |
python
| 2020-05-22T13:34:26Z |
test/integration/targets/cli/aliases
|
needs/target/setup_pexpect
shippable/posix/group3
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,634 |
-K option no longer works for become_password
|
##### SUMMARY
After 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3, `-K` no longer works for become password.
#69629 has a fix to send it to the plugin and get things working again, but @bcoca suggested we should try to do something other than threading it through TQM -> play_context.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become, task_executor
##### ANSIBLE VERSION
devel after 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3
|
https://github.com/ansible/ansible/issues/69634
|
https://github.com/ansible/ansible/pull/69629
|
de3f7c7739851852dec8ea99a76c353317270b70
|
fe9696be525d4ef3177decda6919206492977582
| 2020-05-20T21:30:38Z |
python
| 2020-05-22T13:34:26Z |
test/integration/targets/cli/setup.yml
|
- hosts: localhost
gather_facts: no
roles:
- setup_pexpect
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,634 |
-K option no longer works for become_password
|
##### SUMMARY
After 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3, `-K` no longer works for become password.
#69629 has a fix to send it to the plugin and get things working again, but @bcoca suggested we should try to do something other than threading it through TQM -> play_context.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become, task_executor
##### ANSIBLE VERSION
devel after 2165f9ac40cf212891b11a75bd9b9b2f4f0b8dc3
|
https://github.com/ansible/ansible/issues/69634
|
https://github.com/ansible/ansible/pull/69629
|
de3f7c7739851852dec8ea99a76c353317270b70
|
fe9696be525d4ef3177decda6919206492977582
| 2020-05-20T21:30:38Z |
python
| 2020-05-22T13:34:26Z |
test/integration/targets/cli/test_k_and_K.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,649 |
Module Find returns empty list when setting path to "/" and depth to 1
|
##### SUMMARY
Module "find", when setting path to "/" and depth to 1. The list returned is empty. The reason is that the way currently used to calculate the depth being scanned, against the depth set on the parameter, gives a wrong value for objects under "/", such as "/var", attributing depth 2 to such objects. That way they are not included on the returned list.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
m:find
##### ANSIBLE VERSION
```
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
##### STEPS TO REPRODUCE
{
"ANSIBLE_MODULE_ARGS": {
"depth": 1,
"recurse": true,
"paths": "/",
"file_type": "directory"
}
}
python3 -m ansible.modules.find /tmp/args.json
```
{"files": [], "changed": false, "msg": "", "matched": 0, "invocation": {"module_args": {"depth": 1, "hidden": false, "file_type": "directory", "excludes": null, "patterns": ["*"], "age": null, "get_checksum": false, "recurse": true, "follow": false, "use_regex": false, "size": null, "contains": null, "age_stamp": "mtime", "paths": ["/"]}}, "examined": 0}
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
A list with the directories below:
/ # ls -l
total 60
drwxr-xr-x 1 root root 4096 Apr 24 01:34 bin
drwxr-xr-x 5 root root 340 May 11 21:41 dev
drwxr-xr-x 1 root root 4096 May 11 21:41 etc
drwxr-xr-x 2 root root 4096 Apr 23 06:25 home
drwxr-xr-x 1 root root 4096 Apr 24 01:34 lib
drwxr-xr-x 5 root root 4096 Apr 23 06:25 media
drwxr-xr-x 2 root root 4096 Apr 23 06:25 mnt
drwxr-xr-x 2 root root 4096 Apr 23 06:25 opt
dr-xr-xr-x 206 root root 0 May 11 21:41 proc
drwx------ 1 root root 4096 May 11 21:43 root
drwxr-xr-x 1 root root 4096 May 11 21:41 run
drwxr-xr-x 2 root root 4096 Apr 23 06:25 sbin
drwxr-xr-x 2 root root 4096 Apr 23 06:25 srv
dr-xr-xr-x 13 root root 0 May 1 02:30 sys
drwxrwxrwt 1 root root 4096 May 21 17:22 tmp
drwxr-xr-x 1 root root 4096 Apr 24 01:34 usr
drwxr-xr-x 1 root root 4096 Apr 24 01:34 var
##### ACTUAL RESULTS
```
{"files": [], "changed": false, "msg": "", "matched": 0, "invocation": {"module_args": {"depth": 1, "hidden": false, "file_type": "directory", "excludes": null, "patterns": ["*"], "age": null, "get_checksum": false, "recurse": true, "follow": false, "use_regex": false, "size": null, "contains": null, "age_stamp": "mtime", "paths": ["/"]}}, "examined": 0}
```
|
https://github.com/ansible/ansible/issues/69649
|
https://github.com/ansible/ansible/pull/69650
|
dae3ba71a85ec39396f08235550e06b5c6fe739a
|
fdfa6fec75da14d7e145eccf7c092fba684ee1e2
| 2020-05-21T18:04:11Z |
python
| 2020-05-26T16:30:59Z |
lib/ansible/modules/find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Ruggero Marchei <[email protected]>
# Copyright: (c) 2015, Brian Coca <[email protected]>
# Copyright: (c) 2016-2017, Konstantin Shalygin <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: find
author: Brian Coca (@bcoca)
version_added: "2.0"
short_description: Return a list of files based on specific criteria
description:
- Return a list of files based on specific criteria. Multiple criteria are AND'd together.
- For Windows targets, use the M(win_find) module instead.
options:
age:
description:
- Select files whose age is equal to or greater than the specified time.
- Use a negative age to find files equal to or less than the specified time.
- You can choose seconds, minutes, hours, days, or weeks by specifying the
first letter of any of those words (e.g., "1w").
type: str
patterns:
default: '*'
description:
- One or more (shell or regex) patterns, whose type is controlled by the C(use_regex) option.
- The patterns restrict the list of files to be returned to those whose basenames match at
least one of the patterns specified. Multiple patterns can be specified using a list.
- The pattern is matched against the file base name, excluding the directory.
- When using regexen, the pattern MUST match the ENTIRE file name, not just parts of it. So
if you are looking to match all files ending in .default, you'd need to use '.*\.default'
as a regexp and not just '\.default'.
- This parameter expects a list, which can be either comma separated or YAML. If any of the
patterns contain a comma, make sure to put them in a list to avoid splitting the patterns
in undesirable ways.
type: list
aliases: [ pattern ]
excludes:
description:
- One or more (shell or regex) patterns, whose type is controlled by the C(use_regex) option.
- Items whose basenames match an C(excludes) pattern are culled from C(patterns) matches.
Multiple patterns can be specified using a list.
type: list
aliases: [ exclude ]
version_added: "2.5"
contains:
description:
- A regular expression or pattern which should be matched against the file content.
type: str
paths:
description:
- List of paths of directories to search. All paths must be fully qualified.
type: list
required: true
aliases: [ name, path ]
file_type:
description:
- Type of file to select.
- The 'link' and 'any' choices were added in Ansible 2.3.
type: str
choices: [ any, directory, file, link ]
default: file
recurse:
description:
- If target is a directory, recursively descend into the directory looking for files.
type: bool
default: no
size:
description:
- Select files whose size is equal to or greater than the specified size.
- Use a negative size to find files equal to or less than the specified size.
- Unqualified values are in bytes but b, k, m, g, and t can be appended to specify
bytes, kilobytes, megabytes, gigabytes, and terabytes, respectively.
- Size is not evaluated for directories.
age_stamp:
description:
- Choose the file property against which we compare age.
type: str
choices: [ atime, ctime, mtime ]
default: mtime
hidden:
description:
- Set this to C(yes) to include hidden files, otherwise they will be ignored.
type: bool
default: no
follow:
description:
- Set this to C(yes) to follow symlinks in path for systems with python 2.6+.
type: bool
default: no
get_checksum:
description:
- Set this to C(yes) to retrieve a file's SHA1 checksum.
type: bool
default: no
use_regex:
description:
- If C(no), the patterns are file globs (shell).
- If C(yes), they are python regexes.
type: bool
default: no
depth:
description:
- Set the maximum number of levels to descend into.
- Setting recurse to C(no) overrides this value, effectively limiting the search to depth 1.
- Default is unlimited depth.
type: int
version_added: "2.6"
seealso:
- module: win_find
'''
EXAMPLES = r'''
- name: Recursively find /tmp files older than 2 days
find:
paths: /tmp
age: 2d
recurse: yes
- name: Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
find:
paths: /tmp
age: 4w
size: 1m
recurse: yes
- name: Recursively find /var/tmp files with last access time greater than 3600 seconds
find:
paths: /var/tmp
age: 3600
age_stamp: atime
recurse: yes
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz
find:
paths: /var/log
patterns: '*.old,*.log.gz'
size: 10m
# Note that YAML double quotes require escaping backslashes but yaml single quotes do not.
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz via regex
find:
paths: /var/log
patterns: "^.*?\\.(?:old|log\\.gz)$"
size: 10m
use_regex: yes
- name: Find /var/log all directories, exclude nginx and mysql
find:
paths: /var/log
recurse: no
file_type: directory
excludes: 'nginx,mysql'
# When using patterns that contain a comma, make sure they are formatted as lists to avoid splitting the pattern
- name: Use a single pattern that contains a comma formatted as a list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns: ['^_[0-9]{2,4}_.*.log$']
- name: Use multiple patterns that contain a comma formatted as a YAML list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns:
- '^_[0-9]{2,4}_.*.log$'
- '^[a-z]{1,5}_.*log$'
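# Illustrative depth-limited search (hypothetical example; the path is an example value)
- name: Find directories directly under /srv without descending further
  find:
    paths: /srv
    file_type: directory
    recurse: yes
    depth: 1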
'''
RETURN = r'''
files:
description: All matches found with the specified criteria (see stat module for full output of each dictionary)
returned: success
type: list
sample: [
{ path: "/var/tmp/test1",
mode: "0644",
"...": "...",
checksum: 16fac7be61a6e4591a33ef4b729c5c3302307523
},
{ path: "/var/tmp/test2",
"...": "..."
},
]
matched:
description: Number of matches
returned: success
type: int
sample: 14
examined:
description: Number of filesystem objects looked at
returned: success
type: int
sample: 34
'''
import fnmatch
import grp
import os
import pwd
import re
import stat
import time
from ansible.module_utils.basic import AnsibleModule
def pfilter(f, patterns=None, excludes=None, use_regex=False):
'''filter using glob or regex patterns'''
if patterns is None and excludes is None:
return True
if use_regex:
if patterns and excludes is None:
for p in patterns:
r = re.compile(p)
if r.match(f):
return True
elif patterns and excludes:
for p in patterns:
r = re.compile(p)
if r.match(f):
for e in excludes:
r = re.compile(e)
if r.match(f):
return False
return True
else:
if patterns and excludes is None:
for p in patterns:
if fnmatch.fnmatch(f, p):
return True
elif patterns and excludes:
for p in patterns:
if fnmatch.fnmatch(f, p):
for e in excludes:
if fnmatch.fnmatch(f, e):
return False
return True
return False
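# Illustrative behaviour:
# pfilter('error.log', ['*.log'], ['debug*']) -> True (matches a pattern, not excluded)
# pfilter('debug.log', ['*.log'], ['debug*']) -> False (culled by the exclude)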
def agefilter(st, now, age, timestamp):
'''filter files by age; a negative age selects files newer than the given age'''
if age is None:
return True
elif age >= 0 and now - st.__getattribute__("st_%s" % timestamp) >= abs(age):
return True
elif age < 0 and now - st.__getattribute__("st_%s" % timestamp) <= abs(age):
return True
return False
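# Illustrative: with age=86400 a file qualifies when it is at least one day old;
# with age=-86400 it qualifies when it changed within the last day.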
def sizefilter(st, size):
'''filter files by size; a negative size selects files smaller than the given size'''
if size is None:
return True
elif size >= 0 and st.st_size >= abs(size):
return True
elif size < 0 and st.st_size <= abs(size):
return True
return False
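# Illustrative: size=1024 matches files of at least 1 KiB;
# size=-1024 matches files of at most 1 KiB.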
def contentfilter(fsname, pattern):
"""
Filter files which contain the given expression
:arg fsname: Filename to scan for lines matching a pattern
:arg pattern: Pattern to look for inside of line
:rtype: bool
:returns: True if one of the lines in fsname matches the pattern. Otherwise False
"""
if pattern is None:
return True
prog = re.compile(pattern)
try:
with open(fsname) as f:
for line in f:
if prog.match(line):
return True
except Exception:
pass
return False
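# Illustrative: contentfilter('/etc/hosts', r'127\.0\.0\.1') is True when some
# line starts with a match; note that re.match anchors at the beginning of each line.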
def statinfo(st):
pw_name = ""
gr_name = ""
try: # user data
pw_name = pwd.getpwuid(st.st_uid).pw_name
except Exception:
pass
try: # group data
gr_name = grp.getgrgid(st.st_gid).gr_name
except Exception:
pass
return {
'mode': "%04o" % stat.S_IMODE(st.st_mode),
'isdir': stat.S_ISDIR(st.st_mode),
'ischr': stat.S_ISCHR(st.st_mode),
'isblk': stat.S_ISBLK(st.st_mode),
'isreg': stat.S_ISREG(st.st_mode),
'isfifo': stat.S_ISFIFO(st.st_mode),
'islnk': stat.S_ISLNK(st.st_mode),
'issock': stat.S_ISSOCK(st.st_mode),
'uid': st.st_uid,
'gid': st.st_gid,
'size': st.st_size,
'inode': st.st_ino,
'dev': st.st_dev,
'nlink': st.st_nlink,
'atime': st.st_atime,
'mtime': st.st_mtime,
'ctime': st.st_ctime,
'gr_name': gr_name,
'pw_name': pw_name,
'wusr': bool(st.st_mode & stat.S_IWUSR),
'rusr': bool(st.st_mode & stat.S_IRUSR),
'xusr': bool(st.st_mode & stat.S_IXUSR),
'wgrp': bool(st.st_mode & stat.S_IWGRP),
'rgrp': bool(st.st_mode & stat.S_IRGRP),
'xgrp': bool(st.st_mode & stat.S_IXGRP),
'woth': bool(st.st_mode & stat.S_IWOTH),
'roth': bool(st.st_mode & stat.S_IROTH),
'xoth': bool(st.st_mode & stat.S_IXOTH),
'isuid': bool(st.st_mode & stat.S_ISUID),
'isgid': bool(st.st_mode & stat.S_ISGID),
}
def main():
module = AnsibleModule(
argument_spec=dict(
paths=dict(type='list', required=True, aliases=['name', 'path']),
patterns=dict(type='list', default=['*'], aliases=['pattern']),
excludes=dict(type='list', aliases=['exclude']),
contains=dict(type='str'),
file_type=dict(type='str', default="file", choices=['any', 'directory', 'file', 'link']),
age=dict(type='str'),
age_stamp=dict(type='str', default="mtime", choices=['atime', 'ctime', 'mtime']),
size=dict(type='str'),
recurse=dict(type='bool', default=False),
hidden=dict(type='bool', default=False),
follow=dict(type='bool', default=False),
get_checksum=dict(type='bool', default=False),
use_regex=dict(type='bool', default=False),
depth=dict(type='int'),
),
supports_check_mode=True,
)
params = module.params
filelist = []
if params['age'] is None:
age = None
else:
# convert age to seconds:
m = re.match(r"^(-?\d+)(s|m|h|d|w)?$", params['age'].lower())
seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
if m:
age = int(m.group(1)) * seconds_per_unit.get(m.group(2), 1)
else:
module.fail_json(age=params['age'], msg="failed to process age")
if params['size'] is None:
size = None
else:
# convert size to bytes:
m = re.match(r"^(-?\d+)(b|k|m|g|t)?$", params['size'].lower())
bytes_per_unit = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
if m:
size = int(m.group(1)) * bytes_per_unit.get(m.group(2), 1)
else:
module.fail_json(size=params['size'], msg="failed to process size")
now = time.time()
msg = ''
looked = 0
for npath in params['paths']:
npath = os.path.expanduser(os.path.expandvars(npath))
if os.path.isdir(npath):
for root, dirs, files in os.walk(npath, followlinks=params['follow']):
if params['depth']:
depth = root.replace(npath.rstrip(os.path.sep), '').count(os.path.sep)
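# NOTE: when npath is '/', rstrip(os.path.sep) yields '' and nothing is
# stripped from root, so first-level entries such as /var end up counted
# one level too deep (see issue #69649).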
if files or dirs:
depth += 1
if depth > params['depth']:
continue
looked = looked + len(files) + len(dirs)
for fsobj in (files + dirs):
fsname = os.path.normpath(os.path.join(root, fsobj))
if os.path.basename(fsname).startswith('.') and not params['hidden']:
continue
try:
st = os.lstat(fsname)
except Exception:
msg += "%s was skipped as it does not seem to be a valid file or it cannot be accessed\n" % fsname
continue
r = {'path': fsname}
if params['file_type'] == 'any':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
if stat.S_ISREG(st.st_mode) and params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISDIR(st.st_mode) and params['file_type'] == 'directory':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
elif stat.S_ISREG(st.st_mode) and params['file_type'] == 'file':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and \
agefilter(st, now, age, params['age_stamp']) and \
sizefilter(st, size) and contentfilter(fsname, params['contains']):
r.update(statinfo(st))
if params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISLNK(st.st_mode) and params['file_type'] == 'link':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
if not params['recurse']:
break
else:
msg += "%s was skipped as it does not seem to be a valid directory or it cannot be accessed\n" % npath
matched = len(filelist)
module.exit_json(files=filelist, changed=False, msg=msg, matched=matched, examined=looked)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,237 |
yum module improperly reports error on successful package removal.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The yum module is improperly reporting an error when successfully removing packages, even when the return code is 0, and the output shown clearly indicates success.
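A minimal sketch of the suspected failure mode follows; the helper names (`run_yum`, `is_installed`, `fail`) are hypothetical stand-ins for the module's internals, not its actual code:
```python
rc, out, err = run_yum(['remove', '-y', 'kernel'])  # hypothetical helper; rc == 0 here
for pkg in ['kernel']:
    # Hypothetical post-removal re-check: if it misfires (stale rpmdb data or a
    # spec matching something else), failure is reported despite rc == 0.
    if is_installed(pkg):
        fail("Package '%s' couldn't be removed!" % pkg)
```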
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
centos-release-7-7.1908.0.el7.centos.x86_64
centos-release-7-8.2003.0.el7.centos.x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook included below, which switches from the CentOS stock kernel to the elrepo mainline kernel. The external vars file is not actually necessary within this playbook, and is part of a playbook template. The sole variable defined within is not used in this sub-playbook.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# Migrate to elrepo mainline kernel.
---
- hosts: "{{ current_targets }}"
become: yes
gather_facts: no
vars_files:
- "external_vars.yaml"
tasks:
- name: grub_set_default - Set the grub default.
command: grub2-set-default 0
- name: set_default_kernel - Set the default kernel in sysconfig.
replace:
path: /etc/sysconfig/kernel
regexp: '^(\s*DEFAULTKERNEL\s*=\s*).*$'
replace: \1kernel-ml
backup: yes
register: changed_sysconfig
- name: remake_grub - Re-make the grub configuration, if necessary.
command: grub2-mkconfig -o /boot/grub2/grub.cfg
when: changed_sysconfig is changed
- name: install_elrepo_repository - Install the elrepo repositories.
yum:
name: http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
state: present
register: added_elrepo
- name: enable_elrepo_kernel - Enable the elrepo kernel repository, if necessary.
command: 'yum-config-manager --enable elrepo-kernel'
- name: disable_elrepo - Disable the elrepo general repository, if necessary.
command: 'yum-config-manager --disable elrepo'
- name: install_kernel_ml - Install kernel-ml package.
yum:
name: kernel-ml
state: latest
register: installed_kernel_ml
- name: adjust_sysctl - Adjust sysctl settings for elrepo kernel.
template:
src: ../templates/elrepo_sysctl.conf
dest: /etc/sysctl.d/90_aljex.conf
owner: root
group: root
mode: '0644'
register: added_sysctl_block
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: added_sysctl_block is changed
- name: erase_old_packages - Erase old, unneeded kernel dev packages.
yum:
name:
- kernel-devel
- systemtap
- systemtap-devel
state: absent
register: erased_old_kernel_dev_packages
when: installed_kernel_ml is changed
- name: remove_stock_kernel - Remove the stock CentOS kernel.
yum:
name: kernel
state: absent
when: added_sysctl_block is changed
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: added_sysctl_block is changed
- name: erase_rpms - Erase specific RPMs manually, rather than via yum.
command: rpm -e compat-glibc-headers glibc-headers compat-glibc glibc-devel.x86_64 glibc-devel.i686 gcc libtool libquadmath-devel gcc-gfortran gcc-c++ kernel-headers kernel-tools kernel-tools-libs
args:
warn: false
when: erased_old_kernel_dev_packages is changed
- name: install_new_kernel_dev_packages
yum:
name:
- compat-glibc-headers
- glibc-headers
- compat-glibc
- glibc-devel.x86_64
- glibc-devel.i686
- gcc
- libtool
- libquadmath-devel
- gcc-gfortran
- gcc-c++
- systemtap-devel
- systemtap
- kernel-ml-headers
- kernel-ml-devel
- kernel-ml-tools
- kernel-ml-tools-libs
- python-perf
state: latest
register: installed_new_kernel_dev_packages
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: installed_new_kernel_dev_packages is changed
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
All tasks should complete successfully, including the one that actually succeeds but is being flagged as a failure.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The remove_stock_kernel task erroneously gets flagged as a failure:
```
fatal: [REDACTED]: FAILED! => {"changed": false, "changes": {"removed": ["kernel"]}, "msg": "Package 'kernel' couldn't be removed!", "rc": 0, "results": ["Loaded plugins: fastestmirror\nResolving Dependencies\n--> Running transaction check\n---> Package kernel.x86_64 0:3.10.0-1062.el7 will be erased\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nRemoving:\n kernel x86_64 3.10.0-1062.el7 @anaconda 64 M\n\nTransaction Summary\n================================================================================\nRemove 1 Package\n\nInstalled size: 64 M\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Erasing : kernel-3.10.0-1062.el7.x86_64 1/1 \n Verifying : kernel-3.10.0-1062.el7.x86_64 1/1 \n\nRemoved:\n kernel.x86_64 0:3.10.0-1062.el7
```
The rest of the playbook is never executed due to the 'error'.
I have confirmed that the kernel packages are no longer installed, and were removed successfully.
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook -i inventory -e 'current_targets=testing' playbooks/elrepo_kernel.yaml
```
|
https://github.com/ansible/ansible/issues/69237
|
https://github.com/ansible/ansible/pull/69592
|
f7dfa817ae6542509e0c6eb437ea7bcc51242ca2
|
4aff87770ebab4e11761f4ec3b42834cad648c09
| 2020-04-29T13:10:49Z |
python
| 2020-05-26T18:47:39Z |
lib/ansible/modules/yum.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# Copyright: (c) 2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = '''
---
module: yum
version_added: historical
short_description: Manages packages with the I(yum) package manager
description:
- Installs, upgrades, downgrades, removes, and lists packages and groups with the I(yum) package manager.
- This module only works on Python 2. If you require Python 3 support see the M(dnf) module.
options:
use_backend:
description:
- This module supports C(yum) (as it always has), this is known as C(yum3)/C(YUM3)/C(yum-deprecated) by
upstream yum developers. As of Ansible 2.7+, this module also supports C(YUM4), which is the
"new yum" and it has an C(dnf) backend.
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, yum, yum4, dnf ]
version_added: "2.7"
name:
description:
- A package name or package specifier with version, like C(name-1.0).
- If a previous version is specified, the task also needs to turn C(allow_downgrade) on.
See the C(allow_downgrade) documentation for caveats with downgrading packages.
- When using state=latest, this can be C('*') which means run C(yum -y update).
- You can also pass a url or a local path to a rpm file (using state=present).
To operate on several packages this can accept a comma separated string of packages or (as of 2.0) a list of packages.
aliases: [ pkg ]
exclude:
description:
- Package name(s) to exclude when state=present, or latest
version_added: "2.0"
list:
description:
- "Package name to run the equivalent of yum list --show-duplicates <package> against. In addition to listing packages,
you can also list the following: C(installed), C(updates), C(available) and C(repos)."
- This parameter is mutually exclusive with C(name).
state:
description:
- Whether to install (C(present) or C(installed), C(latest)), or remove (C(absent) or C(removed)) a package.
- C(present) and C(installed) will simply ensure that a desired package is installed.
- C(latest) will update the specified package if it's not of the latest available version.
- C(absent) and C(removed) will remove the specified package.
- Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, in which case C(absent) is inferred.
choices: [ absent, installed, latest, present, removed ]
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
version_added: "0.9"
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
version_added: "0.9"
conf_file:
description:
- The remote yum configuration file to use for the transaction.
version_added: "0.6"
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
version_added: "1.2"
skip_broken:
description:
- Skip packages with broken dependencies (depsolve) that are causing problems.
type: bool
default: "no"
version_added: "2.3"
update_cache:
description:
- Force yum to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "1.9"
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
- Prior to 2.1 the code worked as if this was set to C(yes).
type: bool
default: "yes"
version_added: "2.1"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.5"
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
default: "/"
version_added: "2.3"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.4"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
version_added: "2.6"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a possibly already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.4"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.5"
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.5"
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.7"
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent).
- "NOTE: This feature requires yum >= 3.4.3 (RHEL/CentOS 7+)"
type: bool
default: "no"
version_added: "2.7"
disable_excludes:
description:
- Disable the excludes defined in YUM config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in yum.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the yum lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
- "NOTE: This feature requires yum >= 4 (RHEL/CentOS 8+)"
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
notes:
- When used with a `loop:`, each package will be processed individually;
it is much more efficient to pass the list directly to the `name` option.
- In versions prior to 1.9.2 this module installed and removed each package
given to the yum module separately. This caused problems when packages
specified by filename or url had to be installed or removed together. In
1.9.2 this was fixed so that packages are installed in one yum
transaction. However, if one of the packages adds a new yum repository
that the other packages come from (such as epel-release) then that package
needs to be installed in a separate task. This mimics yum's command line
behaviour.
- 'Yum itself has two types of groups. "Package groups" are specified in the
rpm itself while "environment groups" are specified in a separate file
(usually by the distribution). Unfortunately, this division becomes
apparent to ansible users because ansible needs to operate on the group
of packages in a single transaction and yum requires groups to be specified
in different ways when used in that way. Package groups are specified as
"@development-tools" and environment groups are "@^gnome-desktop-environment".
Use the "yum group list hidden ids" command to see which category of group the group
you want to install falls into.'
- 'The yum module does not support clearing yum cache in an idempotent way, so it
was decided not to implement it, the only method is to use command and call the yum
command directly, namely "command: yum clean all"
https://github.com/ansible/ansible/pull/31450#issuecomment-352889579'
# informational: requirements for nodes
requirements:
- yum
author:
- Ansible Core Team
- Seth Vidal (@skvidal)
- Eduard Snesarev (@verm666)
- Berend De Schouwer (@berenddeschouwer)
- Abhijeet Kasurde (@Akasurde)
- Adam Miller (@maxamillion)
'''
EXAMPLES = '''
- name: install the latest version of Apache
yum:
name: httpd
state: latest
- name: install a list of packages (suitable replacement for 2.11 loop deprecation warning)
yum:
name:
- nginx
- postgresql
- postgresql-server
state: present
- name: install a list of packages with a list variable
yum:
name: "{{ packages }}"
vars:
packages:
- httpd
- httpd-tools
- name: remove the Apache package
yum:
name: httpd
state: absent
- name: install the latest version of Apache from the testing repo
yum:
name: httpd
enablerepo: testing
state: present
- name: install one specific version of Apache
yum:
name: httpd-2.2.29-1.4.amzn1
state: present
- name: upgrade all packages
yum:
name: '*'
state: latest
- name: upgrade all packages, excluding kernel & foo related packages
yum:
name: '*'
state: latest
exclude: kernel*,foo*
- name: install the nginx rpm from a remote repo
yum:
name: http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: install nginx rpm from a local file
yum:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: install the 'Development tools' package group
yum:
name: "@Development tools"
state: present
- name: install the 'Gnome desktop' environment group
yum:
name: "@^gnome-desktop-environment"
state: present
- name: List ansible packages and register result to print with debug later.
yum:
list: ansible
register: result
- name: Install package with multiple repos enabled
yum:
name: sos
enablerepo: "epel,ol7_latest"
- name: Install package with multiple repos disabled
yum:
name: sos
disablerepo: "epel,ol7_latest"
- name: Download the nginx package but do not install it
yum:
name:
- nginx
state: latest
download_only: true
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
import errno
import os
import re
import tempfile
try:
import rpm
HAS_RPM_PYTHON = True
except ImportError:
HAS_RPM_PYTHON = False
try:
import yum
HAS_YUM_PYTHON = True
except ImportError:
HAS_YUM_PYTHON = False
try:
from yum.misc import find_unfinished_transactions, find_ts_remaining
from rpmUtils.miscutils import splitFilename, compareEVR
transaction_helpers = True
except ImportError:
transaction_helpers = False
from contextlib import contextmanager
from ansible.module_utils.urls import fetch_file
def_qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}"
rpmbin = None
class YumModule(YumDnf):
"""
Yum Ansible module back-end implementation
"""
def __init__(self, module):
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# This populates instance vars for all argument spec params
super(YumModule, self).__init__(module)
self.pkg_mgr_name = "yum"
self.lockfile = '/var/run/yum.pid'
self._yum_base = None
def _enablerepos_with_error_checking(self):
# NOTE: This seems unintuitive, but it mirrors yum's CLI behavior
if len(self.enablerepo) == 1:
try:
self.yum_base.repos.enableRepo(self.enablerepo[0])
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.fail_json(msg="Repository %s not found." % self.enablerepo[0])
else:
raise e
else:
for rid in self.enablerepo:
try:
self.yum_base.repos.enableRepo(rid)
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.warn("Repository %s not found." % rid)
else:
raise e
def is_lockfile_pid_valid(self):
try:
try:
with open(self.lockfile, 'r') as f:
oldpid = int(f.readline())
except ValueError:
# invalid data
os.unlink(self.lockfile)
return False
if oldpid == os.getpid():
# that's us?
os.unlink(self.lockfile)
return False
try:
with open("/proc/%d/stat" % oldpid, 'r') as f:
stat = f.readline()
if stat.split()[2] == 'Z':
# Zombie
os.unlink(self.lockfile)
return False
except IOError:
# either /proc is not mounted or the process is already dead
try:
# check the state of the process
os.kill(oldpid, 0)
except OSError as e:
if e.errno == errno.ESRCH:
# No such process
os.unlink(self.lockfile)
return False
self.module.fail_json(msg="Unable to check PID %s in %s: %s" % (oldpid, self.lockfile, to_native(e)))
except (IOError, OSError) as e:
# lockfile disappeared?
return False
# another copy seems to be running
return True
@property
def yum_base(self):
if self._yum_base:
return self._yum_base
else:
# Only init once
self._yum_base = yum.YumBase()
self._yum_base.preconf.debuglevel = 0
self._yum_base.preconf.errorlevel = 0
self._yum_base.preconf.plugins = True
self._yum_base.preconf.enabled_plugins = self.enable_plugin
self._yum_base.preconf.disabled_plugins = self.disable_plugin
if self.releasever:
self._yum_base.preconf.releasever = self.releasever
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
self._yum_base.preconf.root = self.installroot
self._yum_base.conf.installroot = self.installroot
if self.conf_file and os.path.exists(self.conf_file):
self._yum_base.preconf.fn = self.conf_file
if os.geteuid() != 0:
if hasattr(self._yum_base, 'setCacheDir'):
self._yum_base.setCacheDir()
else:
cachedir = yum.misc.getCacheDir()
self._yum_base.repos.setCacheDir(cachedir)
self._yum_base.conf.cache = 0
if self.disable_excludes:
self._yum_base.conf.disable_excludes = self.disable_excludes
# A side effect of accessing conf is that the configuration is
# loaded and plugins are discovered
self.yum_base.conf
try:
for rid in self.disablerepo:
self.yum_base.repos.disableRepo(rid)
self._enablerepos_with_error_checking()
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return self._yum_base
def po_to_envra(self, po):
if hasattr(po, 'ui_envra'):
return po.ui_envra
return '%s:%s-%s-%s.%s' % (po.epoch, po.name, po.version, po.release, po.arch)
def is_group_env_installed(self, name):
name_lower = name.lower()
if yum.__version_info__ >= (3, 4):
groups_list = self.yum_base.doGroupLists(return_evgrps=True)
else:
groups_list = self.yum_base.doGroupLists()
# list of the installed groups on the first index
groups = groups_list[0]
for group in groups:
if name_lower.endswith(group.name.lower()) or name_lower.endswith(group.groupid.lower()):
return True
if yum.__version_info__ >= (3, 4):
# list of the installed env_groups on the third index
envs = groups_list[2]
for env in envs:
if name_lower.endswith(env.name.lower()) or name_lower.endswith(env.environmentid.lower()):
return True
return False
def is_installed(self, repoq, pkgspec, qf=None, is_pkg=False):
if qf is None:
qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}\n"
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.rpmdb.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs and not is_pkg:
pkgs.extend(self.yum_base.returnInstalledPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
global rpmbin
if not rpmbin:
rpmbin = self.module.get_bin_path('rpm', required=True)
cmd = [rpmbin, '-q', '--qf', qf, pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
# rpm localizes messages and we're screen scraping so make sure we use
# the C locale
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc != 0 and 'is not installed' not in out:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err))
if 'is not installed' in out:
out = ''
pkgs = [p for p in out.replace('(none)', '0').split('\n') if p.strip()]
if not pkgs and not is_pkg:
cmd = [rpmbin, '-q', '--qf', qf, '--whatprovides', pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
rc2, out2, err2 = self.module.run_command(cmd, environ_update=lang_env)
else:
rc2, out2, err2 = (0, '', '')
if rc2 != 0 and 'no package provides' not in out2:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err + err2))
if 'no package provides' in out2:
out2 = ''
pkgs += [p for p in out2.replace('(none)', '0').split('\n') if p.strip()]
return pkgs
return []
def is_available(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs:
pkgs.extend(self.yum_base.returnPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return [p for p in out.split('\n') if p.strip()]
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return []
def is_update(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
updates = []
try:
pkgs = self.yum_base.returnPackagesByDep(pkgspec) + \
self.yum_base.returnInstalledPackagesByDep(pkgspec)
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
updates = self.yum_base.doPackageLists(pkgnarrow='updates').updates
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
retpkgs = (pkg for pkg in pkgs if pkg in updates)
return set(self.po_to_envra(p) for p in retpkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--pkgnarrow=updates", "--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return set()
def what_provides(self, repoq, req_spec, qf=def_qf):
if not repoq:
pkgs = []
try:
try:
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
except Exception as e:
# If a repo with `repo_gpgcheck=1` is added and the repo GPG
# key was never accepted, querying this repo will throw an
# error: 'repomd.xml signature could not be verified'. In that
# situation we need to run `yum -y makecache` which will accept
# the key and try again.
if 'repomd.xml signature could not be verified' in to_native(e):
if self.releasever:
self.module.run_command(self.yum_basecmd + ['makecache'] + ['--releasever=%s' % self.releasever])
else:
self.module.run_command(self.yum_basecmd + ['makecache'])
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
else:
raise
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
e, m, _ = self.yum_base.rpmdb.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return set(self.po_to_envra(p) for p in pkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--qf", qf, "--whatprovides", req_spec]
rc, out, err = self.module.run_command(cmd)
cmd = myrepoq + ["--qf", qf, req_spec]
rc2, out2, err2 = self.module.run_command(cmd)
if rc == 0 and rc2 == 0:
out += out2
pkgs = set([p for p in out.split('\n') if p.strip()])
if not pkgs:
pkgs = self.is_installed(repoq, req_spec, qf=qf)
return pkgs
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2))
return set()
def transaction_exists(self, pkglist):
"""
checks the package list to see if any packages are
involved in an incomplete transaction
"""
conflicts = []
if not transaction_helpers:
return conflicts
# first, we create a list of the package 'nvreas'
# so we can compare the pieces later more easily
pkglist_nvreas = (splitFilename(pkg) for pkg in pkglist)
# next, we build the list of packages that are
# contained within an unfinished transaction
unfinished_transactions = find_unfinished_transactions()
for trans in unfinished_transactions:
steps = find_ts_remaining(trans)
for step in steps:
# the action is install/erase/etc., but we only
# care about the package spec contained in the step
(action, step_spec) = step
(n, v, r, e, a) = splitFilename(step_spec)
# and see if that spec is in the list of packages
# requested for installation/updating
for pkg in pkglist_nvreas:
# if the name and arch match, we're going to assume
# this package is part of a pending transaction
# the label is just for display purposes
label = "%s-%s" % (n, a)
if n == pkg[0] and a == pkg[4]:
if label not in conflicts:
conflicts.append("%s-%s" % (n, a))
break
return conflicts
def local_envra(self, path):
"""return envra of a local rpm passed in"""
ts = rpm.TransactionSet()
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
fd = os.open(path, os.O_RDONLY)
try:
header = ts.hdrFromFdno(fd)
except rpm.error as e:
return None
finally:
os.close(fd)
return '%s:%s-%s-%s.%s' % (
header[rpm.RPMTAG_EPOCH] or '0',
header[rpm.RPMTAG_NAME],
header[rpm.RPMTAG_VERSION],
header[rpm.RPMTAG_RELEASE],
header[rpm.RPMTAG_ARCH]
)
@contextmanager
def set_env_proxy(self):
# setting system proxy environment and saving old, if exists
namepass = ""
scheme = ["http", "https"]
old_proxy_env = [os.getenv("http_proxy"), os.getenv("https_proxy")]
try:
# "_none_" is a special value to disable proxy in yum.conf/*.repo
if self.yum_base.conf.proxy and self.yum_base.conf.proxy not in ("_none_",):
if self.yum_base.conf.proxy_username:
namepass = namepass + self.yum_base.conf.proxy_username
proxy_url = self.yum_base.conf.proxy
if self.yum_base.conf.proxy_password:
namepass = namepass + ":" + self.yum_base.conf.proxy_password
elif '@' in self.yum_base.conf.proxy:
namepass = self.yum_base.conf.proxy.split('@')[0].split('//')[-1]
proxy_url = self.yum_base.conf.proxy.replace("{0}@".format(namepass), "")
if namepass:
namepass = namepass + '@'
for item in scheme:
os.environ[item + "_proxy"] = re.sub(
r"(http://)",
r"\g<1>" + namepass, proxy_url
)
else:
for item in scheme:
os.environ[item + "_proxy"] = self.yum_base.conf.proxy
yield
except yum.Errors.YumBaseError:
raise
finally:
# revert back to previously system configuration
for item in scheme:
if os.getenv("{0}_proxy".format(item)):
del os.environ["{0}_proxy".format(item)]
if old_proxy_env[0]:
os.environ["http_proxy"] = old_proxy_env[0]
if old_proxy_env[1]:
os.environ["https_proxy"] = old_proxy_env[1]
def pkg_to_dict(self, pkgstr):
if pkgstr.strip() and pkgstr.count('|') == 5:
n, e, v, r, a, repo = pkgstr.split('|')
else:
return {'error_parsing': pkgstr}
d = {
'name': n,
'arch': a,
'epoch': e,
'release': r,
'version': v,
'repo': repo,
'envra': '%s:%s-%s-%s.%s' % (e, n, v, r, a)
}
if repo == 'installed':
d['yumstate'] = 'installed'
else:
d['yumstate'] = 'available'
return d
def repolist(self, repoq, qf="%{repoid}"):
cmd = repoq + ["--qf", qf, "-a"]
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
rc, out, _ = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
return []
def list_stuff(self, repoquerybin, stuff):
qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|%{repoid}"
# is_installed goes through rpm instead of repoquery so it needs a slightly different format
is_installed_qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|installed\n"
repoq = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.disablerepo:
repoq.extend(['--disablerepo', ','.join(self.disablerepo)])
if self.enablerepo:
repoq.extend(['--enablerepo', ','.join(self.enablerepo)])
if self.installroot != '/':
repoq.extend(['--installroot', self.installroot])
if self.conf_file and os.path.exists(self.conf_file):
repoq += ['-c', self.conf_file]
if stuff == 'installed':
return [self.pkg_to_dict(p) for p in sorted(self.is_installed(repoq, '-a', qf=is_installed_qf)) if p.strip()]
if stuff == 'updates':
return [self.pkg_to_dict(p) for p in sorted(self.is_update(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'available':
return [self.pkg_to_dict(p) for p in sorted(self.is_available(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'repos':
return [dict(repoid=name, state='enabled') for name in sorted(self.repolist(repoq)) if name.strip()]
return [
self.pkg_to_dict(p) for p in
sorted(self.is_installed(repoq, stuff, qf=is_installed_qf) + self.is_available(repoq, stuff, qf=qf))
if p.strip()
]
def exec_install(self, items, action, pkgs, res):
cmd = self.yum_basecmd + [action] + pkgs
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(installed=pkgs))
else:
res['changes'] = dict(installed=pkgs)
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc == 1:
for spec in items:
# Fail on invalid urls:
if ('://' in spec and ('No package %s available.' % spec in out or 'Cannot open: %s. Skipping.' % spec in err)):
err = 'Package at %s could not be installed' % spec
self.module.fail_json(changed=False, msg=err, rc=rc)
res['rc'] = rc
res['results'].append(out)
res['msg'] += err
res['changed'] = True
if ('Nothing to do' in out and rc == 0) or ('does not have any packages' in err):
res['changed'] = False
if rc != 0:
res['changed'] = False
self.module.fail_json(**res)
# Fail if yum prints 'No space left on device' because that means some
# packages failed executing their post install scripts because of lack of
# free space (e.g. kernel package couldn't generate initramfs). Note that
# yum can still exit with rc=0 even if some post scripts didn't execute
# correctly.
if 'No space left on device' in (out or err):
res['changed'] = False
res['msg'] = 'No space left on device'
self.module.fail_json(**res)
# FIXME - if we did an install - go and check the rpmdb to see if it actually installed
# look for each pkg in rpmdb
# look for each pkg via obsoletes
return res
def install(self, items, repoq):
pkgs = []
downgrade_pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['rc'] = 0
res['changed'] = False
for spec in items:
pkg = None
downgrade_candidate = False
# check if pkgspec is installed (if possible for idempotence)
if spec.endswith('.rpm') or '://' in spec:
if '://' not in spec and not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
if '://' in spec:
with self.set_env_proxy():
package = fetch_file(self.module, spec)
if not package.endswith('.rpm'):
# yum requires a local file to have the extension of .rpm and we
# cannot guarantee that from a URL (redirects, proxies, etc)
new_package_path = '%s.rpm' % package
os.rename(package, new_package_path)
package = new_package_path
else:
package = spec
# most common case is the pkg is already installed
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
installed_pkgs = self.is_installed(repoq, envra)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], package))
continue
(name, ver, rel, epoch, arch) = splitFilename(envra)
installed_pkgs = self.is_installed(repoq, name)
# case for two same envr but different archs like x86_64 and i686
if len(installed_pkgs) == 2:
(cur_name0, cur_ver0, cur_rel0, cur_epoch0, cur_arch0) = splitFilename(installed_pkgs[0])
(cur_name1, cur_ver1, cur_rel1, cur_epoch1, cur_arch1) = splitFilename(installed_pkgs[1])
cur_epoch0 = cur_epoch0 or '0'
cur_epoch1 = cur_epoch1 or '0'
compare = compareEVR((cur_epoch0, cur_ver0, cur_rel0), (cur_epoch1, cur_ver1, cur_rel1))
if compare == 0 and cur_arch0 != cur_arch1:
for installed_pkg in installed_pkgs:
if installed_pkg.endswith(arch):
installed_pkgs = [installed_pkg]
if len(installed_pkgs) == 1:
installed_pkg = installed_pkgs[0]
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(installed_pkg)
cur_epoch = cur_epoch or '0'
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
# compare > 0 -> higher version is installed
# compare == 0 -> exact version is installed
# compare < 0 -> lower version is installed
if compare > 0 and self.allow_downgrade:
downgrade_candidate = True
elif compare >= 0:
continue
# else: if there are more installed packages with the same name, that would mean
# kernel, gpg-pubkey or the like, so just let yum deal with it and try to install it
pkg = package
# groups
elif spec.startswith('@'):
if self.is_group_env_installed(spec):
continue
pkg = spec
# range requires or file-requires or pkgname :(
else:
# most common case is the pkg is already installed and done
# short circuit all the bs - and search for it as a pkg in is_installed
# if you find it then we're done
if not set(['*', '?']).intersection(set(spec)):
installed_pkgs = self.is_installed(repoq, spec, is_pkg=True)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], spec))
continue
# look up what pkgs provide this
pkglist = self.what_provides(repoq, spec)
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['rc'] = 125 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of them are installed
# then nothing to do
found = False
for this in pkglist:
if self.is_installed(repoq, this, is_pkg=True):
found = True
res['results'].append('%s providing %s is already installed' % (this, spec))
break
# if the version of the pkg you have installed is not in ANY repo, but there are
# other versions in the repos (both higher and lower) then the previous checks won't work.
# so we check one more time. This really only works for pkgname - not for file provides or virt provides
# but virt provides should be all caught in what_provides on its own.
# highly irritating
if not found:
if self.is_installed(repoq, spec):
found = True
res['results'].append('package providing %s is already installed' % (spec))
if found:
continue
# Downgrade - The yum install command will only install or upgrade to a spec version, it will
# not install an older version of an RPM even if specified by the install spec. So we need to
# determine if this is a downgrade, and then use the yum downgrade command to install the RPM.
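# e.g. with foo-2.0.0 installed, "yum install foo-1.0.0" is a no-op, while
# "yum downgrade foo-1.0.0" replaces it ("foo" is a hypothetical package,
# used purely for illustration)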
if self.allow_downgrade:
for package in pkglist:
# Get the NEVRA of the requested package using pkglist instead of spec because pkglist
# contains consistently-formatted package names returned by yum, rather than user input
# that is often not parsed correctly by splitFilename().
(name, ver, rel, epoch, arch) = splitFilename(package)
# Check if any version of the requested package is installed
inst_pkgs = self.is_installed(repoq, name, is_pkg=True)
if inst_pkgs:
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(inst_pkgs[0])
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
if compare > 0:
downgrade_candidate = True
else:
downgrade_candidate = False
break
# If package needs to be installed/upgraded/downgraded, then pass in the spec
# we could get here if nothing provides it but that's not
# the error we're catching here
pkg = spec
if downgrade_candidate and self.allow_downgrade:
downgrade_pkgs.append(pkg)
else:
pkgs.append(pkg)
if downgrade_pkgs:
res = self.exec_install(items, 'downgrade', downgrade_pkgs, res)
if pkgs:
res = self.exec_install(items, 'install', pkgs, res)
return res
def remove(self, items, repoq):
pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
for pkg in items:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg)
if installed:
pkgs.append(pkg)
else:
res['results'].append('%s is not installed' % pkg)
if pkgs:
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(removed=pkgs))
else:
res['changes'] = dict(removed=pkgs)
# run an actual yum transaction
if self.autoremove:
cmd = self.yum_basecmd + ["autoremove"] + pkgs
else:
cmd = self.yum_basecmd + ["remove"] + pkgs
rc, out, err = self.module.run_command(cmd)
res['rc'] = rc
res['results'].append(out)
res['msg'] = err
if rc != 0:
if self.autoremove and 'No such command' in out:
self.module.fail_json(msg='Version of YUM too old for autoremove: Requires yum 3.4.3 (RHEL/CentOS 7+)')
else:
self.module.fail_json(**res)
# compile the results into one batch. If anything is changed
# then mark changed
# at the end - if we've end up failed then fail out of the rest
# of the process
# at this point we check to see if the pkg is no longer present
self._yum_base = None # previous YumBase package index is now invalid
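# (without this reset, a reused YumBase could answer the is_installed()
# queries below from its stale package index and wrongly report a
# successfully removed package as still present)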
for pkg in pkgs:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg)
if installed:
# Return a message so it's obvious to the user why yum failed
# and which package couldn't be removed. More details:
# https://github.com/ansible/ansible/issues/35672
res['msg'] = "Package '%s' couldn't be removed!" % pkg
self.module.fail_json(**res)
res['changed'] = True
return res
def run_check_update(self):
# run check-update to see if we have packages pending
if self.releasever:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'] + ['--releasever=%s' % self.releasever])
else:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'])
return rc, out, err
@staticmethod
def parse_check_update(check_update_output):
updates = {}
obsoletes = {}
# remove incorrect new lines in longer columns in output from yum check-update
# yum line wrapping can move the repo to the next line
#
# Meant to filter out sets of lines like:
# some_looooooooooooooooooooooooooooooooooooong_package_name 1:1.2.3-1.el7
# some-repo-label
#
# But it also needs to avoid catching lines like:
# Loading mirror speeds from cached hostfile
#
# ceph.x86_64 1:11.2.0-0.el7 ceph
# preprocess string and filter out empty lines so the regex below works
out = re.sub(r'\n[^\w]\W+(.*)', r' \1', check_update_output)
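# e.g. the wrapped two-line entry shown in the comment above is rejoined
# here into a single whitespace-separated "name version repo" line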
available_updates = out.split('\n')
# build update dictionary
for line in available_updates:
line = line.split()
# ignore irrelevant lines
# '*' in line matches lines like mirror lists:
# * base: mirror.corbina.net
# len(line) != 3 or 6 could be junk or a continuation
# len(line) = 6 is package obsoletes
#
# FIXME: what is the '.' not in line conditional for?
if '*' in line or len(line) not in [3, 6] or '.' not in line[0]:
continue
pkg, version, repo = line[0], line[1], line[2]
name, dist = pkg.rsplit('.', 1)
updates.update({name: {'version': version, 'dist': dist, 'repo': repo}})
if len(line) == 6:
obsolete_pkg, obsolete_version, obsolete_repo = line[3], line[4], line[5]
obsolete_name, obsolete_dist = obsolete_pkg.rsplit('.', 1)
obsoletes.update({obsolete_name: {'version': obsolete_version, 'dist': obsolete_dist, 'repo': obsolete_repo}})
return updates, obsoletes
def latest(self, items, repoq):
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
pkgs = {}
pkgs['update'] = []
pkgs['install'] = []
updates = {}
obsoletes = {}
update_all = False
cmd = None
# determine if we're doing an update all
if '*' in items:
update_all = True
rc, out, err = self.run_check_update()
if rc == 0 and update_all:
res['results'].append('Nothing to do here, all packages are up to date')
return res
elif rc == 100:
updates, obsoletes = self.parse_check_update(out)
elif rc == 1:
res['msg'] = err
res['rc'] = rc
self.module.fail_json(**res)
if update_all:
cmd = self.yum_basecmd + ['update']
will_update = set(updates.keys())
will_update_from_other_package = dict()
else:
will_update = set()
will_update_from_other_package = dict()
for spec in items:
# some guess work involved with groups. update @<group> will install the group if missing
if spec.startswith('@'):
pkgs['update'].append(spec)
will_update.add(spec)
continue
# check if pkgspec is installed (if possible for idempotence)
# localpkg
if spec.endswith('.rpm') and '://' not in spec:
if not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# get the pkg e:name-v-r.arch
envra = self.local_envra(spec)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# URL
if '://' in spec:
# download package so that we can check if it's already installed
with self.set_env_proxy():
package = fetch_file(self.module, spec)
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# dep/pkgname - find it
if self.is_installed(repoq, spec):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
pkglist = self.what_provides(repoq, spec)
# FIXME..? may not be desirable to throw an exception here if a single package is missing
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
nothing_to_do = True
for pkg in pkglist:
if spec in pkgs['install'] and self.is_available(repoq, pkg):
nothing_to_do = False
break
# this contains the full NVR and spec could contain wildcards
# or virtual provides (like "python-*" or "smtp-daemon") while
# updates contains name only.
pkgname, _, _, _, _ = splitFilename(pkg)
if spec in pkgs['update'] and pkgname in updates:
nothing_to_do = False
will_update.add(spec)
# Massage the updates list
if spec != pkgname:
# For reporting what packages would be updated more
# succinctly
will_update_from_other_package[spec] = pkgname
break
if not self.is_installed(repoq, spec) and self.update_only:
res['results'].append("Packages providing %s not installed due to update_only specified" % spec)
continue
if nothing_to_do:
res['results'].append("All packages providing %s are up to date" % spec)
continue
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['results'].append("The following packages have pending transactions: %s" % ", ".join(conflicts))
res['rc'] = 128 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# check_mode output
to_update = []
for w in will_update:
if w.startswith('@'):
to_update.append((w, None))
elif w not in updates:
other_pkg = will_update_from_other_package[w]
to_update.append(
(
w,
'because of (at least) %s-%s.%s from %s' % (
other_pkg,
updates[other_pkg]['version'],
updates[other_pkg]['dist'],
updates[other_pkg]['repo']
)
)
)
else:
to_update.append((w, '%s.%s from %s' % (updates[w]['version'], updates[w]['dist'], updates[w]['repo'])))
if self.update_only:
res['changes'] = dict(installed=[], updated=to_update)
else:
res['changes'] = dict(installed=pkgs['install'], updated=to_update)
if obsoletes:
res['obsoletes'] = obsoletes
# return results before we actually execute stuff
if self.module.check_mode:
if will_update or pkgs['install']:
res['changed'] = True
return res
if cmd and self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
# run commands
if cmd: # update all
rc, out, err = self.module.run_command(cmd)
res['changed'] = True
elif self.update_only:
if pkgs['update']:
cmd = self.yum_basecmd + ['update'] + pkgs['update']
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
elif pkgs['install'] or will_update and not self.update_only:
cmd = self.yum_basecmd + ['install'] + pkgs['install'] + pkgs['update']
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
res['rc'] = rc
res['msg'] += err
res['results'].append(out)
if rc:
res['failed'] = True
return res
def ensure(self, repoq):
pkgs = self.names
# autoremove was provided without `name`
if not self.names and self.autoremove:
pkgs = []
self.state = 'absent'
if self.conf_file and os.path.exists(self.conf_file):
self.yum_basecmd += ['-c', self.conf_file]
if repoq:
repoq += ['-c', self.conf_file]
if self.skip_broken:
self.yum_basecmd.extend(['--skip-broken'])
if self.disablerepo:
self.yum_basecmd.extend(['--disablerepo=%s' % ','.join(self.disablerepo)])
if self.enablerepo:
self.yum_basecmd.extend(['--enablerepo=%s' % ','.join(self.enablerepo)])
if self.enable_plugin:
self.yum_basecmd.extend(['--enableplugin', ','.join(self.enable_plugin)])
if self.disable_plugin:
self.yum_basecmd.extend(['--disableplugin', ','.join(self.disable_plugin)])
if self.exclude:
e_cmd = ['--exclude=%s' % ','.join(self.exclude)]
self.yum_basecmd.extend(e_cmd)
if self.disable_excludes:
self.yum_basecmd.extend(['--disableexcludes=%s' % self.disable_excludes])
if self.download_only:
self.yum_basecmd.extend(['--downloadonly'])
if self.download_dir:
self.yum_basecmd.extend(['--downloaddir=%s' % self.download_dir])
if self.releasever:
self.yum_basecmd.extend(['--releasever=%s' % self.releasever])
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
e_cmd = ['--installroot=%s' % self.installroot]
self.yum_basecmd.extend(e_cmd)
if self.state in ('installed', 'present', 'latest'):
""" The need of this entire if conditional has to be changed
this function is the ensure function that is called
in the main section.
This conditional tends to disable/enable repo for
install present latest action, same actually
can be done for remove and absent action
As solution I would advice to cal
try: self.yum_base.repos.disableRepo(disablerepo)
and
try: self.yum_base.repos.enableRepo(enablerepo)
right before any yum_cmd is actually called regardless
of yum action.
Please note that enable/disablerepo options are general
options, this means that we can call those with any action
option. https://linux.die.net/man/8/yum
This docstring will be removed together when issue: #21619
will be solved.
This has been triggered by: #19587
"""
if self.update_cache:
self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
try:
current_repos = self.yum_base.repos.repos.keys()
if self.enablerepo:
try:
new_repos = self.yum_base.repos.repos.keys()
for i in new_repos:
if i not in current_repos:
rid = self.yum_base.repos.getRepo(i)
a = rid.repoXML.repoid # nopep8 - https://github.com/ansible/ansible/pull/21475#pullrequestreview-22404868
current_repos = new_repos
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error setting/accessing repos: %s" % to_native(e))
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error accessing repos: %s" % to_native(e))
if self.state == 'latest' or self.update_only:
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
if self.security:
self.yum_basecmd.append('--security')
if self.bugfix:
self.yum_basecmd.append('--bugfix')
res = self.latest(pkgs, repoq)
elif self.state in ('installed', 'present'):
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
res = self.install(pkgs, repoq)
elif self.state in ('removed', 'absent'):
res = self.remove(pkgs, repoq)
else:
# should be caught by AnsibleModule argument_spec
self.module.fail_json(
msg="we should never get here unless this all failed",
changed=False,
results='',
errors='unexpected state'
)
return res
@staticmethod
def has_yum():
return HAS_YUM_PYTHON
def run(self):
"""
actually execute the module code backend
"""
error_msgs = []
if not HAS_RPM_PYTHON:
error_msgs.append('The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
if not HAS_YUM_PYTHON:
error_msgs.append('The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
self.wait_for_lock()
if error_msgs:
self.module.fail_json(msg='. '.join(error_msgs))
# fedora will redirect yum to dnf, which has incompatibilities
# with how this module expects yum to operate. If yum-deprecated
# is available, use that instead to emulate the old behaviors.
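# (e.g. on Fedora hosts where /usr/bin/yum is a shim pointing at dnf,
# a /usr/bin/yum-deprecated binary, when present, still provides the
# classic yum CLI that this module expects)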
if self.module.get_bin_path('yum-deprecated'):
yumbin = self.module.get_bin_path('yum-deprecated')
else:
yumbin = self.module.get_bin_path('yum')
# need debug level 2 to get 'Nothing to do' for groupinstall.
self.yum_basecmd = [yumbin, '-d', '2', '-y']
if self.update_cache and not self.names and not self.list:
rc, stdout, stderr = self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
if rc == 0:
self.module.exit_json(
changed=False,
msg="Cache updated",
rc=rc,
results=[]
)
else:
self.module.exit_json(
changed=False,
msg="Failed to update cache",
rc=rc,
results=[stderr],
)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.install_repoquery and not repoquerybin and not self.module.check_mode:
yum_path = self.module.get_bin_path('yum')
if yum_path:
if self.releasever:
self.module.run_command('%s -y install yum-utils --releasever %s' % (yum_path, self.releasever))
else:
self.module.run_command('%s -y install yum-utils' % yum_path)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.list:
if not repoquerybin:
self.module.fail_json(msg="repoquery is required to use list= with this module. Please install the yum-utils package.")
results = {'results': self.list_stuff(repoquerybin, self.list)}
else:
# If rhn-plugin is installed and no rhn-certificate is available on
# the system then users will see an error message using the yum API.
# Use repoquery in those cases.
repoquery = None
try:
yum_plugins = self.yum_base.plugins._plugins
except AttributeError:
pass
else:
if 'rhnplugin' in yum_plugins:
if repoquerybin:
repoquery = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.installroot != '/':
repoquery.extend(['--installroot', self.installroot])
if self.disable_excludes:
# repoquery does not support --disableexcludes,
# so make a temp copy of yum.conf and get rid of the 'exclude=' line there
try:
with open('/etc/yum.conf', 'r') as f:
content = f.readlines()
tmp_conf_file = tempfile.NamedTemporaryFile(dir=self.module.tmpdir, delete=False)
self.module.add_cleanup_file(tmp_conf_file.name)
tmp_conf_file.writelines([c for c in content if not c.startswith("exclude=")])
tmp_conf_file.close()
except Exception as e:
self.module.fail_json(msg="Failure setting up repoquery: %s" % to_native(e))
repoquery.extend(['-c', tmp_conf_file.name])
results = self.ensure(repoquery)
if repoquery:
results['msg'] = '%s %s' % (
results.get('msg', ''),
'Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.'
)
self.module.exit_json(**results)
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'yum', 'yum4', 'dnf'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = YumModule(module)
module_implementation.run()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,237 |
yum module improperly reports error on successful package removal.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The yum module improperly reports an error when successfully removing packages, even though the return code is 0 and the output shown clearly indicates success.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
centos-release-7-7.1908.0.el7.centos.x86_64
centos-release-7-8.2003.0.el7.centos.x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook included below, which switches from the CentOS stock kernel to the elrepo mainline kernel. The external vars file is not actually necessary within this playbook, and is part of a playbook template. The sole variable defined within is not used in this sub-playbook.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# Migrate to elrepo mainline kernel.
---
- hosts: "{{ current_targets }}"
become: yes
gather_facts: no
vars_files:
- "external_vars.yaml"
tasks:
- name: grub_set_default - Set the grub default.
command: grub2-set-default 0
- name: set_default_kernel - Set the default kernel in sysconfig.
replace:
path: /etc/sysconfig/kernel
regexp: '^(\s*DEFAULTKERNEL\s*=\s*).*$'
replace: \1kernel-ml
backup: yes
register: changed_sysconfig
- name: remake_grub - Re-make the grub configuration, if necessary.
command: grub2-mkconfig -o /boot/grub2/grub.cfg
when: changed_sysconfig is changed
- name: install_elrepo_repository - Install the elrepo repositories.
yum:
name: http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
state: present
register: added_elrepo
- name: enable_elrepo_kernel - Enable the elrepo kernel repository, if necessary.
command: 'yum-config-manager --enable elrepo-kernel'
- name: disable_elrepo - Disable the elrepo general repository, if necessary.
command: 'yum-config-manager --disable elrepo'
- name: install_kernel_ml - Install kernel-ml package.
yum:
name: kernel-ml
state: latest
register: installed_kernel_ml
- name: adjust_sysctl - Adjust sysctl settings for elrepo kernel.
template:
src: ../templates/elrepo_sysctl.conf
dest: /etc/sysctl.d/90_aljex.conf
owner: root
group: root
mode: '0644'
register: added_sysctl_block
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: added_sysctl_block is changed
- name: erase_old_packages - Erase old, unneeded kernel dev packages.
yum:
name:
- kernel-devel
- systemtap
- systemtap-devel
state: absent
register: erased_old_kernel_dev_packages
when: installed_kernel_ml is changed
- name: remove_stock_kernel - Remove the stock CentOS kernel.
yum:
name: kernel
state: absent
when: added_sysctl_block is changed
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: added_sysctl_block is changed
- name: erase_rpms - Erase specific RPMs manually, rather than via yum.
command: rpm -e compat-glibc-headers glibc-headers compat-glibc glibc-devel.x86_64 glibc-devel.i686 gcc libtool libquadmath-devel gcc-gfortran gcc-c++ kernel-headers kernel-tools kernel-tools-libs
args:
warn: false
when: erased_old_kernel_dev_packages is changed
- name: install_new_kernel_dev_packages
yum:
name:
- compat-glibc-headers
- glibc-headers
- compat-glibc
- glibc-devel.x86_64
- glibc-devel.i686
- gcc
- libtool
- libquadmath-devel
- gcc-gfortran
- gcc-c++
- systemtap-devel
- systemtap
- kernel-ml-headers
- kernel-ml-devel
- kernel-ml-tools
- kernel-ml-tools-libs
- python-perf
state: latest
register: installed_new_kernel_dev_packages
- name: 'reboot_server - Reboot the server.'
reboot:
msg: 'Reboot initiated by Ansible.'
connect_timeout: 5
reboot_timeout: 600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
when: installed_new_kernel_dev_packages is changed
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
All tasks should complete successfully, including the one that actually succeeds but is being flagged as a failure.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The remove_stock_kernel task erroneously gets flagged as a failure:
```
fatal: [REDACTED]: FAILED! => {"changed": false, "changes": {"removed": ["kernel"]}, "msg": "Package 'kernel' couldn't be removed!", "rc": 0, "results": ["Loaded plugins: fastestmirror\nResolving Dependencies\n--> Running transaction check\n---> Package kernel.x86_64 0:3.10.0-1062.el7 will be erased\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nRemoving:\n kernel x86_64 3.10.0-1062.el7 @anaconda 64 M\n\nTransaction Summary\n================================================================================\nRemove 1 Package\n\nInstalled size: 64 M\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Erasing : kernel-3.10.0-1062.el7.x86_64 1/1 \n Verifying : kernel-3.10.0-1062.el7.x86_64 1/1 \n\nRemoved:\n kernel.x86_64 0:3.10.0-1062.el7
```
The rest of the playbook is never executed due to the 'error'.
I have confirmed that the kernel packages are no longer installed, and were removed successfully.
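The failure seems to come from the module's own post-removal verification rather than from yum itself. Below is a minimal sketch of that check, paraphrased from the module source (names follow the module; this is not a verbatim excerpt):
```python
# After the "yum remove" transaction finishes, the module re-queries every
# requested package; if the query still reports a package as installed, the
# task is failed even though yum itself exited with rc == 0.
for pkg in pkgs:
    if self.is_installed(repoq, pkg):  # can return stale data if a cached package index is reused
        res['msg'] = "Package '%s' couldn't be removed!" % pkg
        self.module.fail_json(**res)
```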
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook -i inventory -e 'current_targets=testing' playbooks/elrepo_kernel.yaml
```
|
https://github.com/ansible/ansible/issues/69237
|
https://github.com/ansible/ansible/pull/69592
|
f7dfa817ae6542509e0c6eb437ea7bcc51242ca2
|
4aff87770ebab4e11761f4ec3b42834cad648c09
| 2020-04-29T13:10:49Z |
python
| 2020-05-26T18:47:39Z |
test/integration/targets/yum/tasks/yum.yml
|
# UNINSTALL
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_result
- name: verify uninstallation of sos
assert:
that:
- "yum_result is success"
- "rpm_result is failed"
# UNINSTALL AGAIN
- name: uninstall sos again in check mode
yum: name=sos state=removed
check_mode: true
register: yum_result
- name: verify no change on re-uninstall in check mode
assert:
that:
- "not yum_result is changed"
- name: uninstall sos again
yum: name=sos state=removed
register: yum_result
- name: verify no change on re-uninstall
assert:
that:
- "not yum_result is changed"
# INSTALL
- name: install sos in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify installation of sos in check mode
assert:
that:
- "yum_result is changed"
- name: install sos
yum: name=sos state=present
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos
# INSTALL AGAIN
- name: install sos again in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify no change on second install in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again
yum: name=sos state=present
register: yum_result
- name: verify no change on second install
assert:
that:
- "not yum_result is changed"
- name: install sos again with empty string enablerepo
yum: name=sos state=present enablerepo=""
register: yum_result
- name: verify no change on third install with empty string enablerepo
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
# This test case is unfortunately distro specific because we have to specify
# repo names which are not the same across Fedora/RHEL/CentOS for base/updates
- name: install sos again with missing repo enablerepo
yum:
name: sos
state: present
enablerepo:
- "thisrepodoesnotexist"
- "base"
- "updates"
disablerepo: "*"
register: yum_result
when: ansible_distribution == 'CentOS'
- name: verify no change on fourth install with missing repo enablerepo (yum)
assert:
that:
- "yum_result is success"
- "yum_result is not changed"
when: ansible_distribution == 'CentOS'
# This test case is unfortunately distro specific because we have to specify
# repo names which are not the same across Fedora/RHEL/CentOS for base/updates
- name: install sos again with disable all and enable select repo(s)
yum:
name: sos
state: present
enablerepo:
- "base"
- "updates"
disablerepo: "*"
register: yum_result
when: ansible_distribution == 'CentOS'
- name: verify no change on install with disable all and enable select repo(s) (yum)
assert:
that:
- "yum_result is success"
- "yum_result is not changed"
when: ansible_distribution == 'CentOS'
- name: install sos again with only missing repo enablerepo
yum:
name: sos
state: present
enablerepo: "thisrepodoesnotexist"
ignore_errors: true
register: yum_result
- name: verify no change on fifth install with only missing repo enablerepo (yum)
assert:
that:
- "yum_result is not success"
when: ansible_pkg_mgr == 'yum'
- name: verify no change on fifth install with only missing repo enablerepo (dnf)
assert:
that:
- "yum_result is success"
when: ansible_pkg_mgr == 'dnf'
# INSTALL AGAIN WITH LATEST
- name: install sos again with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos again with state latest in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos again with state latest idempotence
assert:
that:
- "not yum_result is changed"
# INSTALL WITH LATEST
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: verify uninstall sos
assert:
that:
- "yum_result is successful"
- name: copy yum.conf file in case it is missing
copy:
src: yum.conf
dest: /etc/yum.conf
force: False
register: yum_conf_copy
- block:
- name: install sos with state latest in check mode with config file param
yum: name=sos state=latest conf_file=/etc/yum.conf
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode with config file param
assert:
that:
- "yum_result is changed"
always:
- name: remove tmp yum.conf file if we created it
file:
path: /etc/yum.conf
state: absent
when: yum_conf_copy is changed
- name: install sos with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode
assert:
that:
- "yum_result is changed"
- name: install sos with state latest
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest
assert:
that:
- "yum_result is changed"
- name: install sos with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence
assert:
that:
- "not yum_result is changed"
- name: install sos with state latest idempotence with config file param
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence with config file param
assert:
that:
- "not yum_result is changed"
# Multiple packages
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_sos_result
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify packages uninstalled
assert:
that:
- "rpm_sos_result is failed"
- "rpm_bc_result is failed"
- name: install sos and bc as comma separated
yum: name=sos,bc state=present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as list
yum:
name:
- sos
- bc
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as comma separated with spaces
yum:
name: "sos, bc"
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: install non-existent rpm
yum:
name: does-not-exist
register: non_existent_rpm
ignore_errors: True
- name: check non-existent rpm install failed
assert:
that:
- non_existent_rpm is failed
# Install in installroot='/'
- name: install sos
yum: name=sos state=present installroot='/'
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos --root=/
- name: uninstall sos
yum:
name: sos
installroot: '/'
state: removed
register: yum_result
- name: Test download_only
yum:
name: sos
state: latest
download_only: true
register: yum_result
- name: verify download of sos (part 1 -- yum "install" succeeded)
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: uninstall sos (noop)
yum:
name: sos
state: removed
register: yum_result
- name: verify download of sos (part 2 -- nothing removed during uninstall)
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: uninstall sos for downloadonly/downloaddir test
yum:
name: sos
state: absent
- name: Test download_only/download_dir
yum:
name: sos
state: latest
download_only: true
download_dir: "/var/tmp/packages"
register: yum_result
- name: verify yum output
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- command: "ls /var/tmp/packages"
register: ls_out
- name: Verify specified download_dir was used
assert:
that:
- "'sos' in ls_out.stdout"
- name: install group
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify installation of the group
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify nothing changed
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again but also with a package that is not yet installed
yum:
name:
- "@Custom Group"
- sos
state: present
register: yum_result
- name: verify sos is installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install the group again, with --check to check 'changed'
yum:
name: "@Custom Group"
state: present
check_mode: yes
register: yum_result
- name: verify nothing changed
assert:
that:
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing group
yum:
name: "@non-existing-group"
state: present
register: yum_result
ignore_errors: True
- name: verify installation of the non existing group failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- "yum_result is failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing file
yum:
name: /tmp/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: try to install from non existing url
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: use latest to install httpd
yum:
name: httpd
state: latest
register: yum_result
- name: verify httpd was installed
assert:
that:
- "'changed' in yum_result"
- name: uninstall httpd
yum:
name: httpd
state: removed
- name: update httpd only if it exists
yum:
name: httpd
state: latest
update_only: yes
register: yum_result
- name: verify httpd not installed
assert:
that:
- "not yum_result is changed"
- "'Packages providing httpd not installed due to update_only specified' in yum_result.results"
- name: try to install incompatible arch rpm on non-ppc64le, should fail
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/banner-1.3.4-3.el7.ppc64le.rpm
state: present
register: yum_result
ignore_errors: True
when:
- ansible_architecture not in ['ppc64le']
- name: verify that yum failed on non-ppc64le
assert:
that:
- "not yum_result is changed"
- "yum_result is failed"
when:
- ansible_architecture not in ['ppc64le']
- name: try to install incompatible arch rpm on ppc64le, should fail
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/tinyproxy-1.10.0-3.el7.x86_64.rpm
state: present
register: yum_result
ignore_errors: True
when:
- ansible_architecture in ['ppc64le']
- name: verify that yum failed on ppc64le
assert:
that:
- "not yum_result is changed"
- "yum_result is failed"
when:
- ansible_architecture in ['ppc64le']
# setup for testing installing an RPM from url
- set_fact:
pkg_name: fpaste
- name: cleanup
yum:
name: "{{ pkg_name }}"
state: absent
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.7.4.1-2.el7.noarch.rpm
when: ansible_python.version.major == 2
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.9.2-1.fc28.noarch.rpm
when: ansible_python.version.major == 3
# setup end
- name: download an rpm
get_url:
url: "{{ pkg_url }}"
dest: "/tmp/{{ pkg_name }}.rpm"
- name: install the downloaded rpm
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the downloaded rpm again
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: clean up
yum:
name: "{{ pkg_name }}"
state: absent
- name: install from url
yum:
name: "{{ pkg_url }}"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: Create a temp RPM file which does not contain nevra information
file:
name: "/tmp/non_existent_pkg.rpm"
state: touch
- name: Try installing RPM file which does not contain nevra information
yum:
name: "/tmp/non_existent_pkg.rpm"
state: present
register: no_nevra_info_result
ignore_errors: yes
- name: Verify RPM failed to install
assert:
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- name: Delete a temp RPM file
file:
name: "/tmp/non_existent_pkg.rpm"
state: absent
- name: get yum version
yum:
list: yum
register: yum_version
- name: set yum_version of installed version
set_fact:
yum_version: "{%- if item.yumstate == 'installed' -%}{{ item.version }}{%- else -%}{{ yum_version }}{%- endif -%}"
with_items: "{{ yum_version.results }}"
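# Note: this loop repeatedly reassigns yum_version, keeping item.version
# only for the entry whose yumstate is 'installed', so yum_version ends up
# holding the installed version string instead of the full list result.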
- name: Ensure double uninstall of wildcard globs works
block:
- name: "Install lohit-*-fonts"
yum:
name: "lohit-*-fonts"
state: present
- name: "Remove lohit-*-fonts (1st time)"
yum:
name: "lohit-*-fonts"
state: absent
register: remove_lohit_fonts_1
- name: "Verify lohit-*-fonts (1st time)"
assert:
that:
- "remove_lohit_fonts_1 is changed"
- "'msg' in remove_lohit_fonts_1"
- "'results' in remove_lohit_fonts_1"
- name: "Remove lohit-*-fonts (2nd time)"
yum:
name: "lohit-*-fonts"
state: absent
register: remove_lohit_fonts_2
- name: "Verify lohit-*-fonts (2nd time)"
assert:
that:
- "remove_lohit_fonts_2 is not changed"
- "'msg' in remove_lohit_fonts_2"
- "'results' in remove_lohit_fonts_2"
- "'lohit-*-fonts is not installed' in remove_lohit_fonts_2['results']"
- block:
- name: uninstall bc
yum: name=bc state=removed
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify bc is uninstalled
assert:
that:
- "rpm_bc_result is failed"
- name: exclude bc (yum backend)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=)(.)*
line: "exclude=bc*"
state: present
when: ansible_pkg_mgr == 'yum'
- name: exclude bc (dnf backend)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=)(.)*
line: "excludepkgs=bc*"
state: present
when: ansible_pkg_mgr == 'dnf'
# begin test case where disable_excludes is supported
- name: Try to install bc without disable_excludes
yum: name=bc state=latest
register: yum_bc_result
ignore_errors: True
- name: verify bc did not install because it is in exclude list
assert:
that:
- "yum_bc_result is failed"
- name: install bc with disable_excludes
yum: name=bc state=latest disable_excludes=all
register: yum_bc_result_using_excludes
- name: verify bc did install using disable_excludes=all
assert:
that:
- "yum_bc_result_using_excludes is success"
- "yum_bc_result_using_excludes is changed"
- "yum_bc_result_using_excludes is not failed"
- name: remove exclude bc (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=bc*)
line: "exclude="
state: present
when: ansible_pkg_mgr == 'yum'
- name: remove exclude bc (cleanup dnf.conf)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=bc*)
line: "excludepkgs="
state: present
when: ansible_pkg_mgr == 'dnf'
# Fedora < 26 has a bug in dnf where package excludes in dnf.conf aren't
# actually honored and those releases are EOL'd so we have no expectation they
# will ever be fixed
when: not ((ansible_distribution == "Fedora") and (ansible_distribution_major_version|int < 26))
---
Status: closed | Repo: ansible/ansible (https://github.com/ansible/ansible) | Issue: #64966
Title: Ansible blockinfile ^M bug!?
### SUMMARY
On SUSE SLES there is a file, /etc/vimrc, that contains ^M (carriage return) characters, for example in line 101:
map! <ESC>OM ^M
When the blockinfile module runs against this file, it strips the ^M characters.
The blockinfile task only needs to append some lines at the end of the file, yet even when those lines are already present it still removes the ^M characters.
It appears the module reads the file, cannot handle the ^M characters, and writes the buffer back without them.
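A minimal repro of the likely mechanism (illustrative Python, not taken from the module): `bytes.splitlines()` treats a bare carriage return as a line boundary, so splitting and then rejoining with `b'\n'` silently rewrites every ^M.
```python
original = b'map! <ESC>OM \rrest-of-line\n'
lines = original.splitlines()       # [b'map! <ESC>OM ', b'rest-of-line']
result = b'\n'.join(lines) + b'\n'  # b'map! <ESC>OM \nrest-of-line\n'
assert b'\r' not in result          # the ^M is gone
```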
##### ISSUE TYPE
- Module blockinfile
##### COMPONENT NAME
blockinfile
##### ANSIBLE VERSION
```
ansible 2.4.2.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
Issue URL: https://github.com/ansible/ansible/issues/64966
Pull request: https://github.com/ansible/ansible/pull/66461
SHA before fix: 8b6c02fc6979ba758b2f06aecee18995f13b2d9c | SHA after fix: e5cc12a64f274e36738fe51d4caa533003ed626b
Reported: 2019-11-17T20:11:01Z | Language: python | Committed: 2020-05-27T15:05:07Z
Updated file: changelogs/fragments/66461-blockinfile_preserve_line_endings.yaml (file content empty in this record)
---
Status: closed | Repo: ansible/ansible | Issue: #64966 — Ansible blockinfile ^M bug!? (same issue and fix as above)
Updated file: lib/ansible/modules/blockinfile.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: blockinfile
short_description: Insert/update/remove a text block surrounded by marker lines
version_added: '2.0'
description:
- This module will insert/update/remove a block of multi-line text surrounded by customizable marker lines.
author:
- Yaegashi Takeshi (@yaegashi)
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: yes
aliases: [ dest, destfile, name ]
state:
description:
- Whether the block should be there or not.
type: str
choices: [ absent, present ]
default: present
marker:
description:
- The marker line template.
- C({mark}) will be replaced with the values in C(marker_begin) (default="BEGIN") and C(marker_end) (default="END").
- Using a custom marker without the C({mark}) variable may result in the block being repeatedly inserted on subsequent playbook runs.
type: str
default: '# {mark} ANSIBLE MANAGED BLOCK'
block:
description:
- The text to insert inside the marker lines.
- If it is missing or an empty string, the block will be removed as if C(state) were set to C(absent).
type: str
default: ''
aliases: [ content ]
insertafter:
description:
- If specified, the block will be inserted after the last match of the specified regular expression.
- A special value is available: C(EOF) for inserting the block at the end of the file.
- If the specified regular expression has no matches, C(EOF) will be used instead.
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- If specified, the block will be inserted before the last match of the specified regular expression.
- A special value is available: C(BOF) for inserting the block at the beginning of the file.
- If the specified regular expression has no matches, the block will be inserted at the end of the file.
type: str
choices: [ BOF, '*regex*' ]
create:
description:
- Create a new file if it does not exist.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
marker_begin:
description:
- This will be inserted at C({mark}) in the opening ansible block marker.
type: str
default: BEGIN
version_added: '2.5'
marker_end:
required: false
description:
- This will be inserted at C({mark}) in the closing ansible block marker.
type: str
default: END
version_added: '2.5'
notes:
- This module supports check mode.
- When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
- Option I(follow) has been removed in Ansible 2.5, because this module modifies the contents of the file so I(follow=no) doesn't make sense.
- When more than one block should be handled in one file you must change the I(marker) per task.
extends_documentation_fragment:
- files
- validate
'''
EXAMPLES = r'''
# Before Ansible 2.3, option 'dest' or 'name' was used instead of 'path'
- name: Insert/Update "Match User" configuration block in /etc/ssh/sshd_config
blockinfile:
path: /etc/ssh/sshd_config
block: |
Match User ansible-agent
PasswordAuthentication no
- name: Insert/Update eth0 configuration stanza in /etc/network/interfaces
(it might be better to copy files into /etc/network/interfaces.d/)
blockinfile:
path: /etc/network/interfaces
block: |
iface eth0 inet static
address 192.0.2.23
netmask 255.255.255.0
- name: Insert/Update configuration using a local file and validate it
blockinfile:
block: "{{ lookup('file', './local/sshd_config') }}"
path: /etc/ssh/sshd_config
backup: yes
validate: /usr/sbin/sshd -T -f %s
- name: Insert/Update HTML surrounded by custom markers after <body> line
blockinfile:
path: /var/www/html/index.html
marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
insertafter: "<body>"
block: |
<h1>Welcome to {{ ansible_hostname }}</h1>
<p>Last updated on {{ ansible_date_time.iso8601 }}</p>
- name: Remove HTML as well as surrounding markers
blockinfile:
path: /var/www/html/index.html
marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
block: ""
- name: Add mappings to /etc/hosts
blockinfile:
path: /etc/hosts
block: |
{{ item.ip }} {{ item.name }}
marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.name }}"
loop:
- { name: host1, ip: 10.10.1.10 }
- { name: host2, ip: 10.10.1.11 }
- { name: host3, ip: 10.10.1.12 }
'''
import re
import os
import tempfile
from ansible.module_utils.six import b
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes
def write_changes(module, contents, path):
tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
f = os.fdopen(tmpfd, 'wb')
f.write(contents)
f.close()
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % tmpfile)
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile, path, unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_file_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
marker=dict(type='str', default='# {mark} ANSIBLE MANAGED BLOCK'),
block=dict(type='str', default='', aliases=['content']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
validate=dict(type='str'),
marker_begin=dict(type='str', default='BEGIN'),
marker_end=dict(type='str', default='END'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True
)
params = module.params
path = params['path']
if os.path.isdir(path):
module.fail_json(rc=256,
msg='Path %s is a directory !' % path)
path_exists = os.path.exists(path)
if not path_exists:
if not module.boolean(params['create']):
module.fail_json(rc=257,
msg='Path %s does not exist !' % path)
destpath = os.path.dirname(path)
if not os.path.exists(destpath) and not module.check_mode:
try:
os.makedirs(destpath)
except Exception as e:
module.fail_json(msg='Error creating %s Error code: %s Error description: %s' % (destpath, e[0], e[1]))
original = None
lines = []
else:
f = open(path, 'rb')
original = f.read()
f.close()
lines = original.splitlines()
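# NOTE: splitlines() also treats a bare b'\r' (^M) as a line boundary;
# rejoining these lines with b'\n' later on is what rewrites the ^M
# characters reported in the issue above.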
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % path,
'after_header': '%s (content)' % path}
if module._diff and original:
diff['before'] = original
insertbefore = params['insertbefore']
insertafter = params['insertafter']
block = to_bytes(params['block'])
marker = to_bytes(params['marker'])
present = params['state'] == 'present'
if not present and not path_exists:
module.exit_json(changed=False, msg="File %s not present" % path)
if insertbefore is None and insertafter is None:
insertafter = 'EOF'
if insertafter not in (None, 'EOF'):
insertre = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
insertre = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
insertre = None
marker0 = re.sub(b(r'{mark}'), b(params['marker_begin']), marker)
marker1 = re.sub(b(r'{mark}'), b(params['marker_end']), marker)
if present and block:
# Escape sequences like '\n' need to be handled in Ansible 1.x
if module.ansible_version.startswith('1.'):
block = re.sub('', block, '')
blocklines = [marker0] + block.splitlines() + [marker1]
else:
blocklines = []
n0 = n1 = None
for i, line in enumerate(lines):
if line == marker0:
n0 = i
if line == marker1:
n1 = i
if None in (n0, n1):
n0 = None
if insertre is not None:
for i, line in enumerate(lines):
if insertre.search(line):
n0 = i
if n0 is None:
n0 = len(lines)
elif insertafter is not None:
n0 += 1
elif insertbefore is not None:
n0 = 0 # insertbefore=BOF
else:
n0 = len(lines) # insertafter=EOF
elif n0 < n1:
lines[n0:n1 + 1] = []
else:
lines[n1:n0 + 1] = []
n0 = n1
lines[n0:n0] = blocklines
if lines:
result = b('\n').join(lines)
if original is None or original.endswith(b('\n')):
result += b('\n')
else:
result = b''
if module._diff:
diff['after'] = result
if original == result:
msg = ''
changed = False
elif original is None:
msg = 'File created'
changed = True
elif not blocklines:
msg = 'Block removed'
changed = True
else:
msg = 'Block inserted'
changed = True
if changed and not module.check_mode:
if module.boolean(params['backup']) and path_exists:
module.backup_local(path)
# We should always follow symlinks so that we change the real file
real_path = os.path.realpath(params['path'])
write_changes(module, result, real_path)
if module.check_mode and not path_exists:
module.exit_json(changed=changed, msg=msg, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % path
attr_diff['after_header'] = '%s (file attributes)' % path
difflist = [diff, attr_diff]
module.exit_json(changed=changed, msg=msg, diff=difflist)
if __name__ == '__main__':
main()
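One possible direction for a fix (a sketch only, not necessarily the change that was merged): split the original bytes on b'\n' alone, so stray b'\r' bytes survive the split/join round trip.

# hypothetical alternative to original.splitlines(); keeps ^M bytes intact
lines = original.split(b'\n')
if lines and lines[-1] == b'':
    lines.pop()  # drop the empty trailing element created by a final newline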
---
Status: closed | Repo: ansible/ansible | Issue: #64966 — Ansible blockinfile ^M bug!? (same issue and fix as above)
Updated file: test/integration/targets/blockinfile/tasks/main.yml
# Test code for the blockinfile module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact:
output_dir_test: "{{ output_dir }}/test_blockinfile"
- name: make sure our testing sub-directory does not exist
file:
path: "{{ output_dir_test }}"
state: absent
- name: create our testing sub-directory
file:
path: "{{ output_dir_test }}"
state: directory
##
## blockinfile
##
- name: copy the sshd_config to the test dir
copy:
src: sshd_config
dest: "{{ output_dir_test }}"
- name: insert/update "Match User" configuration block in sshd_config
blockinfile:
path: "{{ output_dir_test }}/sshd_config"
block: |
Match User ansible-agent
PasswordAuthentication no
register: blockinfile_test0
- name: check content
shell: 'grep -e "Match User ansible-agent" -e "PasswordAuthentication no" {{ output_dir_test }}/sshd_config'
register: blockinfile_test0_grep
- debug:
var: blockinfile_test0
verbosity: 1
- debug:
var: blockinfile_test0_grep
verbosity: 1
- name: validate first example results
assert:
that:
- 'blockinfile_test0.changed is defined'
- 'blockinfile_test0.msg is defined'
- 'blockinfile_test0.changed'
- 'blockinfile_test0.msg == "Block inserted"'
- 'blockinfile_test0_grep.stdout_lines | length == 2'
- name: check idempotence
blockinfile:
path: "{{ output_dir_test }}/sshd_config"
block: |
Match User ansible-agent
PasswordAuthentication no
register: blockinfile_test1
- name: validate idempotence results
assert:
that:
- 'not blockinfile_test1.changed'
- name: Create a file with blockinfile
blockinfile:
path: "{{ output_dir_test }}/empty.txt"
block: |
Hey
there
state: present
create: yes
register: empty_test_1
- name: Run a task that results in an empty file
blockinfile:
path: "{{ output_dir_test }}/empty.txt"
block: |
Hey
there
state: absent
create: yes
register: empty_test_2
- stat:
path: "{{ output_dir_test }}/empty.txt"
register: empty_test_stat
- name: Ensure empty file was created
assert:
that:
- empty_test_1 is changed
- "'File created' in empty_test_1.msg"
- empty_test_2 is changed
- "'Block removed' in empty_test_2.msg"
- empty_test_stat.stat.size == 0
---
Status: closed | Repo: ansible/ansible (https://github.com/ansible/ansible) | Issue: #69009
Title: CLI help for 'ansible-galaxy' not properly updated for 'collection' subcommand
##### SUMMARY
With the introduction of the `Collection` mechanism in the `ansible-galaxy` CLI, the command and sub-command help texts should reflect the new feature. The CLI currently accepts the `collection` sub-command, but the help text still claims `role` is the only valid sub-command.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ansible-galaxy`
##### ANSIBLE VERSION
```
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dvercill/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dvercill/.local/lib/python3.8/site-packages/ansible
executable location = /home/dvercill/.local/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Fedora Workstation 32 beta
##### STEPS TO REPRODUCE
```
$ ansible-galaxy doesnotexist
usage: ansible-galaxy role [-h] ROLE_ACTION ...
ansible-galaxy role: error: argument ROLE_ACTION: invalid choice: 'doesnotexist' (choose from 'init', 'remove', 'delete', 'list', 'search', 'import', 'setup', 'login', 'info', 'install')
```
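A minimal sketch of the expected behaviour (illustrative argparse code, not the actual ansible-galaxy parser): registering both types on the top-level parser makes the error message list every valid choice.
```python
import argparse

parser = argparse.ArgumentParser(prog='ansible-galaxy')
subparsers = parser.add_subparsers(dest='type', metavar='TYPE')
subparsers.required = True

role = subparsers.add_parser('role', help='Manage roles')
collection = subparsers.add_parser('collection', help='Manage collections')

# parse_args(['doesnotexist']) now fails with:
#   ansible-galaxy: error: argument TYPE: invalid choice: 'doesnotexist'
#   (choose from 'role', 'collection')
parser.parse_args(['doesnotexist'])
```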
Issue URL: https://github.com/ansible/ansible/issues/69009
Pull request: https://github.com/ansible/ansible/pull/69458
SHA before fix: 4dd0f41270a734e307984e3e80b19d5e96069c28 | SHA after fix: 187de7a8aaaf125c12b8a440c5362166eff30358
Reported: 2020-04-17T17:28:25Z | Language: python | Committed: 2020-05-28T14:38:48Z
Updated file: changelogs/fragments/69458-updated-galaxy-cli-help.yaml (file content empty in this record)
---
Status: closed | Repo: ansible/ansible | Issue: #69009 — CLI help for 'ansible-galaxy' not properly updated for 'collection' subcommand (same issue and fix as above)
Updated file: lib/ansible/cli/__init__.py
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import getpass
import os
import re
import subprocess
import sys
from abc import ABCMeta, abstractmethod
from ansible.cli.arguments import option_helpers as opt_help
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import with_metaclass, string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
display = Display()
class CLI(with_metaclass(ABCMeta, object)):
''' code behind bin/ansible* programs '''
_ITALIC = re.compile(r"I\(([^)]+)\)")
_BOLD = re.compile(r"B\(([^)]+)\)")
_MODULE = re.compile(r"M\(([^)]+)\)")
_URL = re.compile(r"U\(([^)]+)\)")
_CONST = re.compile(r"C\(([^)]+)\)")
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1]['version']
display.deprecated("%s option, %s %s" % (name, why, alt), version=ver)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
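# e.g. 'dev@~/.vault_pass' -> ('dev', '~/.vault_pass'); 'prompt' -> (None, 'prompt')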
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets set up, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Tower versions that expect a
# certain vault password prompt format, so the 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
# an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
# an invalid password file will error globally
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
raise
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
return vault_secrets
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
if op['ask_pass']:
sshpass = getpass.getpass(prompt="SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
else:
become_prompt = "%s password: " % become_prompt_method
if op['become_ask_pass']:
becomepass = getpass.getpass(prompt=become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
except EOFError:
pass
# we 'wrap' the passwords to prevent templating as
# they can contain special chars and trigger it incorrectly
if sshpass:
sshpass = to_unsafe_text(sshpass)
if becomepass:
becomepass = to_unsafe_text(becomepass)
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(os.path.basename(self.args[0]), usage=usage, desc=desc, epilog=epilog, )
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = string_types.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults does not do what's expected
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
# Dup args set on the root parser and sub parsers results in the root parser ignoring the args. e.g. doing
# 'ansible-galaxy -vvv init' has no verbosity set but 'ansible-galaxy init -vvv' sets a level of 3. To preserve
# back compat with pre-argparse changes we manually scan and set verbosity based on the argv values.
if self.parser.prog in ['ansible-galaxy', 'ansible-vault'] and not options.verbosity:
verbosity_arg = next(iter([arg for arg in self.args if arg.startswith('-v')]), None)
if verbosity_arg:
display.deprecated("Setting verbosity before the arg sub command is deprecated, set the verbosity "
"after the sub command", "2.13")
options.verbosity = verbosity_arg.count('v')
return options
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
options = self.parser.parse_args(self.args[1:])
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
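# returns a dict like (values illustrative):
# {'string': '2.10.0.dev0', 'full': '2.10.0', 'major': 2, 'minor': 10, 'revision': 0}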
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
@staticmethod
def pager(text):
''' find reasonable way to display text '''
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
@classmethod
def tty_ify(cls, text):
t = cls._ITALIC.sub("`" + r"\1" + "'", text) # I(word) => `word'
t = cls._BOLD.sub("*" + r"\1" + "*", t) # B(word) => *word*
t = cls._MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word]
t = cls._URL.sub(r"\1", t) # U(word) => word
t = cls._CONST.sub("`" + r"\1" + "'", t) # C(word) => `word'
return t
@staticmethod
def _play_prereqs():
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'])
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
return hosts
---
Status: closed | Repo: ansible/ansible (https://github.com/ansible/ansible) | Issue: #67900
Title: get_certificate always checks availability of pyopenssl backend
##### SUMMARY
If pyOpenSSL is unavailable, the `get_certificate` module fails with an error about missing `pyopenssl`, even when the `cryptography` backend is available or has been selected via `select_crypto_backend`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
get_certificate
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /home/gp/.ansible.cfg
configured module search path = ['/home/gp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gp/.local/share/virtualenvs/ansible-site-jT6p0Ljs/lib/python3.7/site-packages/ansible
executable location = /home/gp/.local/share/virtualenvs/ansible-site-jT6p0Ljs/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```paste below
CACHE_PLUGIN(/home/gp/.ansible.cfg) = redis
CACHE_PLUGIN_CONNECTION(/home/gp/.ansible.cfg) = 127.0.0.1:6379:0
CACHE_PLUGIN_TIMEOUT(/home/gp/.ansible.cfg) = 300
CONDITIONAL_BARE_VARS(/home/gp/.ansible.cfg) = False
DEFAULT_HOST_LIST(/home/gp/.ansible.cfg) = ['/home/gp/infra']
DEFAULT_JINJA2_NATIVE(/home/gp/.ansible.cfg) = True
DEFAULT_REMOTE_USER(/home/gp/.ansible.cfg) = root
INTERPRETER_PYTHON(/home/gp/.ansible.cfg) = auto
INVENTORY_ENABLED(/home/gp/.ansible.cfg) = ['auto', 'yaml', 'ini']
```
##### OS / ENVIRONMENT
Controller and managed host is the same one, Debian 10 buster using python from a virtualenv created using the system python. In the managed host virtualenv, Ansible 2.9.5 is installed into it as well as cryptography 2.6. pyopenssl is not installed in this virtualenv nor on the system python of the target host. Host uses system package python3-cryptography.
##### STEPS TO REPRODUCE
Try to use `get_certificate` with `select_crypto_backend: cryptography`:
```yaml
- hosts: localhost
connection: local
gather_facts: no
vars:
servers:
- host: example.net
- host: example.org
tasks:
- name: Test HTTPS download
get_certificate:
host: "{{ item.host }}"
port: "{{ item.port | default(443) | int }}"
select_crypto_backend: cryptography
loop: "{{ servers }}"
run_once: true
register: reg_cert_check
```
##### EXPECTED RESULTS
It should use the cryptography backend instead of complaining about missing pyopenssl >= 0.15
##### ACTUAL RESULTS
```paste below
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_get_certificate_payload_pwc38k3u/ansible_get_certificate_payload.zip/ansible/modules/crypto/get_certificate.py", line 185, in <module>
ModuleNotFoundError: No module named 'OpenSSL'
failed: [localhost] (item={'host': 'example.org'}) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"ca_cert": null,
"host": "example.org",
"port": 443,
"proxy_host": null,
"proxy_port": 8080,
"select_crypto_backend": "cryptography",
"timeout": 10
}
},
"item": {
"host": "example.org"
},
"msg": "Failed to import the required Python library (pyOpenSSL >= 0.15) on platinum's Python /home/gp/.local/share/virtualenvs/ansible-site-jT6p0Ljs/bin/python3.7m. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"
}
```
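A sketch of the general fix pattern (hypothetical helper, not the actual patch): detect each backend independently and only require the one that was selected.
```python
import traceback

PYOPENSSL_IMP_ERR = None
try:
    import OpenSSL  # pyOpenSSL backend
    HAS_PYOPENSSL = True
except ImportError:
    PYOPENSSL_IMP_ERR = traceback.format_exc()
    HAS_PYOPENSSL = False

CRYPTOGRAPHY_IMP_ERR = None
try:
    import cryptography.x509  # cryptography backend
    HAS_CRYPTOGRAPHY = True
except ImportError:
    CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
    HAS_CRYPTOGRAPHY = False

def ensure_backend(module, backend):
    """Fail only if the *selected* backend is missing (module is an AnsibleModule)."""
    if backend == 'pyopenssl' and not HAS_PYOPENSSL:
        module.fail_json(msg='pyOpenSSL backend selected but not installed',
                         exception=PYOPENSSL_IMP_ERR)
    if backend == 'cryptography' and not HAS_CRYPTOGRAPHY:
        module.fail_json(msg='cryptography backend selected but not installed',
                         exception=CRYPTOGRAPHY_IMP_ERR)
```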
Issue URL: https://github.com/ansible/ansible/issues/67900
Pull request: https://github.com/ansible/ansible/pull/69268
SHA before fix: 0894ea1b1d50ef0f51f0ac7312fb9ec3d7d75872 | SHA after fix: 1e01ac413b874d77cab74457ab6b38f6a1d5becb
Reported: 2020-03-01T19:51:21Z | Language: python | Committed: 2020-05-28T20:56:26Z
Updated file: docs/docsite/rst/community/development_process.rst
.. _community_development_process:
*****************************
The Ansible Development Cycle
*****************************
The Ansible development cycle happens on two levels. At a macro level, the team plans releases and tracks progress with roadmaps and projects. At a micro level, each PR has its own lifecycle.
.. contents::
:local:
Macro development: roadmaps, releases, and projects
===================================================
If you want to follow the conversation about what features will be added to Ansible for upcoming releases and what bugs are being fixed, you can watch these resources:
* the :ref:`roadmaps`
* the :ref:`Ansible Release Schedule <release_and_maintenance>`
* various GitHub `projects <https://github.com/ansible/ansible/projects>`_ - for example:
* the `2.10 release project <https://github.com/ansible/ansible/projects/39>`_
* the `network bugs project <https://github.com/ansible/ansible/projects/20>`_
* the `core documentation project <https://github.com/ansible/ansible/projects/27>`_
.. _community_pull_requests:
Micro development: the lifecycle of a PR
========================================
Ansible accepts code through **pull requests** ("PRs" for short). GitHub provides a great overview of `how the pull request process works <https://help.github.com/articles/about-pull-requests/>`_ in general. The ultimate goal of any pull request is to get merged and become part of Ansible Core.
Here's an overview of the PR lifecycle:
* Contributor opens a PR
* Ansibot reviews the PR
* Ansibot assigns labels
* Ansibot pings maintainers
* Shippable runs the test suite
* Developers, maintainers, community review the PR
* Contributor addresses any feedback from reviewers
* Developers, maintainers, community re-review
* PR merged or closed
Automated PR review: ansibullbot
--------------------------------
Because Ansible receives many pull requests, and because we love automating things, we've automated several steps of the process of reviewing and merging pull requests with a tool called Ansibullbot, or Ansibot for short.
`Ansibullbot <https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md>`_ serves many functions:
- Responds quickly to PR submitters to thank them for submitting their PR
- Identifies the community maintainer responsible for reviewing PRs for any files affected
- Tracks the current status of PRs
- Pings responsible parties to remind them of any PR actions for which they may be responsible
- Provides maintainers with the ability to move PRs through the workflow
- Identifies PRs abandoned by their submitters so that we can close them
- Identifies modules abandoned by their maintainers so that we can find new maintainers
Ansibot workflow
^^^^^^^^^^^^^^^^
Ansibullbot runs continuously. You can generally expect to see changes to your issue or pull request within thirty minutes. Ansibullbot examines every open pull request in the repositories, and enforces state roughly according to the following workflow:
- If a pull request has no workflow labels, it's considered **new**. Files in the pull request are identified, and the maintainers of those files are pinged by the bot, along with instructions on how to review the pull request. (Note: sometimes we strip labels from a pull request to "reboot" this process.)
- If the module maintainer is not ``$team_ansible``, the pull request then goes into the **community_review** state.
- If the module maintainer is ``$team_ansible``, the pull request then goes into the **core_review** state (and probably sits for a while).
- If the pull request is in **community_review** and has received comments from the maintainer:
- If the maintainer says ``shipit``, the pull request is labeled **shipit**, whereupon the Core team assesses it for final merge.
- If the maintainer says ``needs_info``, the pull request is labeled **needs_info** and the submitter is asked for more info.
- If the maintainer says **needs_revision**, the pull request is labeled **needs_revision** and the submitter is asked to fix some things.
- If the submitter says ``ready_for_review``, the pull request is put back into **community_review** or **core_review** and the maintainer is notified that the pull request is ready to be reviewed again.
- If the pull request is labeled **needs_revision** or **needs_info** and the submitter has not responded lately:
- The submitter is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending action**, and the issue or pull request will be closed two weeks after that.
- If the submitter responds at all, the clock is reset.
- If the pull request is labeled **community_review** and the reviewer has not responded lately:
- The reviewer is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and then may be reassigned to ``$team_ansible`` or labeled **core_review**, or often the submitter of the pull request is asked to step up as a maintainer.
- If Shippable tests fail, or if the code is not able to be merged, the pull request is automatically put into **needs_revision** along with a message to the submitter explaining why.
There are corner cases and frequent refinements, but this is the workflow in general.
PR labels
^^^^^^^^^
There are two types of PR Labels generally: **workflow** labels and **information** labels.
Workflow labels
"""""""""""""""
- **community_review**: Pull requests for modules that are currently awaiting review by their maintainers in the Ansible community.
- **core_review**: Pull requests for modules that are currently awaiting review by their maintainers on the Ansible Core team.
- **needs_info**: Waiting on info from the submitter.
- **needs_rebase**: Waiting on the submitter to rebase.
- **needs_revision**: Waiting on the submitter to make changes.
- **shipit**: Waiting for final review by the core team for potential merge.
Information labels
""""""""""""""""""
- **backport**: this is applied automatically if the PR is requested against any branch that is not devel. The bot immediately assigns the labels backport and ``core_review``.
- **bugfix_pull_request**: applied by the bot based on the templatized description of the PR.
- **cloud**: applied by the bot based on the paths of the modified files.
- **docs_pull_request**: applied by the bot based on the templatized description of the PR.
- **easyfix**: applied manually, inconsistently used but sometimes useful.
- **feature_pull_request**: applied by the bot based on the templatized description of the PR.
- **networking**: applied by the bot based on the paths of the modified files.
- **owner_pr**: largely deprecated. Formerly workflow, now informational. Originally, PRs submitted by the maintainer would automatically go to **shipit** based on this label. If the submitter is also a maintainer, we notify the other maintainers and still require one of the maintainers (including the submitter) to give a **shipit**.
- **pending_action**: applied by the bot to PRs that are not moving. Reviewed every couple of weeks by the community team, who tries to figure out the appropriate action (closure, asking for new maintainers, and so on).
Special Labels
""""""""""""""
- **new_plugin**: this is for new modules or plugins that are not yet in Ansible.
**Note:** `new_plugin` kicks off a completely separate process, and frankly it doesn't work very well at present. We're doing our best to improve this process.
Human PR review
---------------
After Ansibot reviews the PR and applies labels, the PR is ready for human review. The most likely reviewers for any PR are the maintainers for the module that PR modifies.
Each module has at least one assigned :ref:`maintainer <maintainers>`, listed in the `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_ file.
The maintainer's job is to review PRs that affect that module and decide whether they should be merged (``shipit``) or revised (``needs_revision``). We'd like to have at least one community maintainer for every module. If a module has no community maintainers assigned, the maintainer is listed as ``$team_ansible``.
Once a human applies the ``shipit`` label, the :ref:`committers <community_committer_guidelines>` decide whether the PR is ready to be merged. Not every PR that gets the ``shipit`` label is actually ready to be merged, but the better our reviewers are, and the better our guidelines are, the more likely it will be that a PR that reaches **shipit** will be mergeable.
Making your PR merge-worthy
===========================
We don't merge every PR. Here are some tips for making your PR useful, attractive, and merge-worthy.
.. _community_changelogs:
Changelogs
----------
Changelogs help users and developers keep up with changes to Ansible.
Ansible builds a changelog for each release from fragments. You **must** add a changelog fragment to any PR that changes functionality or fixes a bug.
You don't have to add a changelog fragment for PRs that add new
modules and plugins, because our tooling does that for you automatically.
We build short summary changelogs for minor releases as well as for major releases. If you backport a bugfix, include a changelog fragment with the backport PR.
.. _changelogs_how_to:
Creating a changelog fragment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A basic changelog fragment is a ``.yaml`` file placed in the
``changelogs/fragments/`` directory. Each file contains a YAML dict with
keys like ``bugfixes`` or ``major_changes`` followed by a list of
changelog entries for those bugfixes or features. Each changelog entry is
reStructuredText embedded inside the YAML file, which means that certain
constructs need to be escaped so they are interpreted by RST and not by
YAML (or escaped for both YAML and RST, if needed). Each PR **must** use a
new fragment file rather than adding to an existing one, so we can trace
the change back to the PR that introduced it.
To create a changelog entry, create a new file with a unique name in the ``changelogs/fragments/`` directory. The file name should include the PR number and a description of the change. It must end with the file extension ``.yaml``. For example: ``40696-user-backup-shadow-file.yaml``
A single changelog fragment may contain multiple sections, but most contain only one; a combined example appears after the single-section examples below.
The toplevel keys (bugfixes, major_changes, and so on) are defined in the
`config file <https://github.com/ansible/ansible/blob/devel/changelogs/config.yaml>`_ for our release note tool. Here are the valid sections and a description of each:
**major_changes**
Major changes to Ansible itself. Generally does not include module or plugin changes.
**minor_changes**
Minor changes to Ansible, modules, or plugins. This includes new features, new parameters added to modules, or behavior changes to existing parameters.
**deprecated_features**
Features that have been deprecated and are scheduled for removal in a future release.
**removed_features**
Features that were previously deprecated and are now removed.
**bugfixes**
Fixes that resolve issues. If there is a specific issue related to this bugfix, add a link in the changelog entry.
**known_issues**
Known issues that are currently not fixed or will not be fixed.
Most changelog entries will be ``bugfixes`` or ``minor_changes``. When writing a changelog entry that pertains to a particular module, start the entry with ``- [module name] -`` and include a link to the related issue if one exists.
Here are some examples:
.. code-block:: yaml
bugfixes:
- win_updates - fixed issue where running win_updates on async fails without any error
.. code-block:: yaml
minor_changes:
- lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443)
.. code-block:: yaml
bugfixes:
- copy module - The copy module was attempting to change the mode of files for
remote_src=True even if mode was not set as a parameter. This failed on
filesystems which do not have permission bits.
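As noted above, a single fragment file can combine several sections. A minimal sketch, reusing the example entries shown earlier:
.. code-block:: yaml
   minor_changes:
     - lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443)
   bugfixes:
     - win_updates - fixed issue where running win_updates on async fails without any error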
You can find more example changelog fragments in the `changelog directory <https://github.com/ansible/ansible/tree/stable-2.6/changelogs/fragments>`_ for the 2.6 release. You can also find documentation of the format, including hints on embedding rst in the yaml, in the `reno documentation <https://docs.openstack.org/reno/latest/user/usage.html#editing-a-release-note>`_.
Once you've written the changelog fragment for your PR, commit the file and include it with the pull request.
.. _backport_process:
Backporting merged PRs
======================
All Ansible PRs must be merged to the ``devel`` branch first.
After a pull request has been accepted and merged to the ``devel`` branch, the following instructions will help you create a
pull request to backport the change to a previous stable branch.
We do **not** backport features.
.. note::
These instructions assume that:
* ``stable-2.9`` is the targeted release branch for the backport
* ``https://github.com/ansible/ansible.git`` is configured as a
``git remote`` named ``upstream``. If you do not use
a ``git remote`` named ``upstream``, adjust the instructions accordingly.
* ``https://github.com/<yourgithubaccount>/ansible.git``
is configured as a ``git remote`` named ``origin``. If you do not use
a ``git remote`` named ``origin``, adjust the instructions accordingly.
#. Prepare your devel, stable, and feature branches:
::
git fetch upstream
git checkout -b backport/2.9/[PR_NUMBER_FROM_DEVEL] upstream/stable-2.9
#. Cherry pick the relevant commit SHA from the devel branch into your feature
branch, handling merge conflicts as necessary:
::
git cherry-pick -x [SHA_FROM_DEVEL]
#. Add a :ref:`changelog fragment <changelogs_how_to>` for the change, and commit it.
#. Push your feature branch to your fork on GitHub:
::
git push origin backport/2.9/[PR_NUMBER_FROM_DEVEL]
#. Submit the pull request for ``backport/2.9/[PR_NUMBER_FROM_DEVEL]``
against the ``stable-2.9`` branch
#. The Release Manager will decide whether to merge the backport PR before
the next minor release. There isn't any need to follow up. Just ensure that the automated
tests (CI) are green.
.. note::
The choice to use ``backport/2.9/[PR_NUMBER_FROM_DEVEL]`` as the
name for the feature branch is somewhat arbitrary, but conveys meaning
about the purpose of that branch. It is not required to use this format,
but it can be helpful, especially when making multiple backport PRs for
multiple stable branches.
.. note::
If you prefer, you can use CPython's cherry-picker tool
(``pip install --user 'cherry-picker >= 1.3.2'``) to backport commits
from devel to stable branches in Ansible. Take a look at the `cherry-picker
documentation <https://pypi.org/p/cherry-picker#cherry-picking>`_ for
details on installing, configuring, and using it.
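   For example, a basic invocation might look something like this (a sketch;
   consult the cherry-picker documentation for the exact syntax of your
   installed version)::
      cherry_picker [SHA_FROM_DEVEL] stable-2.9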
----
Status: closed
Repository: ansible/ansible (https://github.com/ansible/ansible)
Issue #66529: Multiple when conditions including a defined check fails if written on a single line
##### SUMMARY
I've spent the better part of a working day troubleshooting this specific issue, and I don't think this issue is documented or already reported. I have a dictionary, which *sometimes* contains a list of dicts to be looped on, depending on what a key in the inside dict is set to.
So obviously I want to skip the task entirely when that list isn't present, and that is not a problem. But when combined with a check for a key value on item inside the loop, the when condition fails complaining about the list being undefined. Which is hugely annoying as I was checking explicitly for that in the first condition of my when statement.
This is part of a larger playbook which I cannot share, but I've managed to reproduce it with a small test case. I will paste two small playbooks, one that works and one that doesn't. (But in my opinion it should.)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible
##### ANSIBLE VERSION
```
ansible-playbook 2.8.0a1.post0
config file = /Users/jh/git/ansible_cisco/ansible.cfg
configured module search path = [u'/Users/jh/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jh/git/ansible_cisco/lib/python2.7/site-packages/ansible
executable location = ./bin/ansible-playbook
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
```
DEFAULT_STDOUT_CALLBACK(/Users/jh/git/ansible_cisco/ansible.cfg) = skippy
DEFAULT_TIMEOUT(/Users/jh/git/ansible_cisco/ansible.cfg) = 300
HOST_KEY_CHECKING(/Users/jh/git/ansible_cisco/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/jh/git/ansible_cisco/ansible.cfg) = ./bin/python
PARAMIKO_LOOK_FOR_KEYS(/Users/jh/git/ansible_cisco/ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/Users/jh/git/ansible_cisco/ansible.cfg) = 300
PERSISTENT_CONNECT_TIMEOUT(/Users/jh/git/ansible_cisco/ansible.cfg) = 300
RETRY_FILES_ENABLED(/Users/jh/git/ansible_cisco/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/jh/git/ansible_cisco/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
Running MacOS X Mojave and use a python virtualenv for the sole purpose of having stable versions of everything in this specific repository. None of this should matter though as this seems to be a purely ansible core thing.
##### STEPS TO REPRODUCE
This playbook will fail with an error about mylist2 not being defined, despite the when statement explicitly checking for that. Remove the `and item.test == 'test'` part from the when statement and it will work correctly.
```
---
- name: test ansible when defined
hosts: localhost
become: no
gather_facts: no
vars:
mydict:
var1: "val1"
var2: "val2"
mylist1:
- test1
- test2
tasks:
- debug:
var: mydict
- debug:
msg: "This shouldn't be printed."
with_items: "{{mydict.mylist2}}"
      when: mydict.mylist2 is defined and item.test == 'test'
```
This version, with the parts of the when statement on different lines works:
```
---
- name: test ansible when defined
hosts: localhost
become: no
gather_facts: no
vars:
mydict:
var1: "val1"
var2: "val2"
mylist1:
- test1
- test2
tasks:
- debug:
var: mydict
- debug:
msg: "This shouldn't be printed."
with_items: "{{mydict.mylist2}}"
when:
- mydict.mylist2 is defined
        - item.test == 'test'
```
##### EXPECTED RESULTS
I would expect both cases to work, and just skip the second task.
```
./bin/ansible-playbook works.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [test ansible when defined] *************************************************************************************
TASK [debug] *********************************************************************************************************
ok: [localhost] => {
"mydict": {
"mylist1": [
"test1",
"test2"
],
"var1": "val1",
"var2": "val2"
}
}
TASK [debug] *********************************************************************************************************
PLAY RECAP ***********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
##### ACTUAL RESULTS
```
./bin/ansible-playbook fails.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [test ansible when defined] *************************************************************************************
TASK [debug] *********************************************************************************************************
ok: [localhost] => {
"mydict": {
"mylist1": [
"test1",
"test2"
],
"var1": "val1",
"var2": "val2"
}
}
TASK [debug] *********************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "'dict object' has no attribute 'mylist2'"}
PLAY RECAP ***********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Issue: https://github.com/ansible/ansible/issues/66529
Fixed by: https://github.com/ansible/ansible/pull/68485
Before-fix SHA: 1e01ac413b874d77cab74457ab6b38f6a1d5becb
After-fix SHA: 061c6c7c6fdb26ca572eba2ccff2557106435c44
Reported: 2020-01-16T12:48:07Z
Language: python
Fix committed: 2020-05-28T21:13:39Z
Updated file: changelogs/fragments/66529-display-both-loop-and-cond-errors.yml (content not recorded)
Also updated by the same issue and pull request: lib/ansible/executor/task_executor.py (content follows)
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
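# Note: TaskTimeoutError deliberately derives from BaseException rather than
# Exception so broad 'except Exception' handlers cannot swallow it; task_timeout
# is installed as the SIGALRM handler in _execute() when a task sets 'timeout',
# turning the alarm into an exception that unwinds the handler's run() call.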
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
    new_args = {}
    for key, value in iteritems(task_args):
        if value == omit_token:
            continue
        elif isinstance(value, dict):
            new_args[key] = remove_omit(value, omit_token)
        elif isinstance(value, list):
            new_args[key] = [remove_omit(v, omit_token) for v in value]
        else:
            new_args[key] = value
    return new_args
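# Illustrative example (hypothetical values): with omit_token='__omit__',
#   remove_omit({'src': '/tmp/x', 'mode': '__omit__', 'opts': {'a': '__omit__', 'b': 1}}, '__omit__')
# returns {'src': '/tmp/x', 'opts': {'b': 1}}.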
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
# Modules that we optimize by squashing loop items into a single call to
# the module
SQUASH_ACTIONS = frozenset(C.DEFAULT_SQUASH_ACTIONS)
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results, and set the global changed/failed result flags based on any item.
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
                    if not res.get('failed', False):
res['msg'] = 'All items completed'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# save the play context variables to a temporary dictionary,
# so that we can modify the job vars without doing a full copy
# and later restore them to avoid modifying things too early
play_context_vars = dict()
self._play_context.update_vars(play_context_vars)
old_vars = dict()
for k in play_context_vars:
if k in self._job_vars:
old_vars[k] = self._job_vars[k]
self._job_vars[k] = play_context_vars[k]
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
# now we restore any old job variables that may have been modified,
# and delete them if they were in the play context vars but not in
# the old variables dictionary
for k in play_context_vars:
if k in old_vars:
self._job_vars[k] = old_vars[k]
else:
del self._job_vars[k]
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
            # This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
if self._task.loop_with:
            # Only squash with 'with_:' not with the 'loop:', 'magic' squashing can be removed once with_ loops are deprecated
items = self._squash_items(items, loop_var, task_vars)
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
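                # Illustrative shape: for items=['a', 'b', 'c'] at item_index=1 this yields
                # {'allitems': ['a', 'b', 'c'], 'index': 2, 'index0': 1, 'first': False,
                #  'last': False, 'length': 3, 'revindex': 2, 'revindex0': 1,
                #  'nextitem': 'c', 'previtem': 'a'}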
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.put(
TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
),
block=False,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _squash_items(self, items, loop_var, variables):
'''
Squash items down to a comma-separated list for certain modules which support it
(typically package management modules).
'''
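        # Illustrative example: a 'with_items: [vim, git]' loop with args
        # {'name': '{{ item }}'} for a squashable action is rewritten into a
        # single call with args {'name': ['vim', 'git']}, and [final_items] is
        # returned so the caller's loop executes exactly once.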
name = None
try:
# _task.action could contain templatable strings (via action: and
# local_action:) Template it before comparing. If we don't end up
# optimizing it here, the templatable string might use template vars
# that aren't available until later (it could even use vars from the
# with_items loop) so don't make the templated string permanent yet.
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
task_action = self._task.action
if templar.is_template(task_action):
task_action = templar.template(task_action, fail_on_undefined=False)
if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
if all(isinstance(o, string_types) for o in items):
final_items = []
found = None
for allowed in ['name', 'pkg', 'package']:
name = self._task.args.pop(allowed, None)
if name is not None:
found = allowed
break
# This gets the information to check whether the name field
# contains a template that we can squash for
template_no_item = template_with_item = None
if name:
if templar.is_template(name):
variables[loop_var] = '\0$'
template_no_item = templar.template(name, variables, cache=False)
variables[loop_var] = '\0@'
template_with_item = templar.template(name, variables, cache=False)
del variables[loop_var]
# Check if the user is doing some operation that doesn't take
# name/pkg or the name/pkg field doesn't have any variables
# and thus the items can't be squashed
if template_no_item != template_with_item:
if self._task.loop_with and self._task.loop_with not in ('items', 'list'):
value_text = "\"{{ query('%s', %r) }}\"" % (self._task.loop_with, self._task.loop)
else:
value_text = '%r' % self._task.loop
# Without knowing the data structure well, it's easiest to strip python2 unicode
# literals after stringifying
value_text = re.sub(r"\bu'", "'", value_text)
display.deprecated(
'Invoking "%s" only once while using a loop via squash_actions is deprecated. '
'Instead of using a loop to supply multiple items and specifying `%s: "%s"`, '
'please use `%s: %s` and remove the loop' % (self._task.action, found, name, found, value_text),
version='2.11'
)
for item in items:
variables[loop_var] = item
if self._task.evaluate_conditional(templar, variables):
new_item = templar.template(name, cache=False)
final_items.append(new_item)
self._task.args['name'] = final_items
# Wrap this in a list so that the calling function loop
# executes exactly once
return [final_items]
else:
# Restore the name parameter
self._task.args['name'] = name
# elif:
# Right now we only optimize single entries. In the future we
# could optimize more types:
# * lists can be squashed together
# * dicts could squash entries that match in all cases except the
# name or pkg field.
except Exception:
# Squashing is an optimization. If it fails for any reason,
# simply use the unoptimized list of items.
# Restore the name parameter
if name is not None:
self._task.args['name'] = name
return items
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
context_validation_error = None
try:
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
# FIXME: update connection/shell plugin options
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError:
# loop error takes precedence
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in ('include', 'include_tasks'):
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
        # if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action == 'include_role':
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
self._task.post_validate(templar=templar)
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(variables=variables, templar=templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
delegated_vars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
templar.available_variables = delegated_vars
plugin_vars = self._set_connection_options(delegated_vars, templar)
templar.available_variables = orig_vars
else:
# just use normal host vars
plugin_vars = self._set_connection_options(variables, templar)
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(self._task.action, self._task.args, self._task.module_defaults, templar)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
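        # Example: a task with 'until:' and 'retries: 3' reaches here with
        # retries == 4, i.e. up to four runs (the initial attempt plus three
        # retries); without 'until:' the task runs exactly once.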
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# FIXME callback 'v2_runner_on_async_poll' here
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
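            # Both helpers evaluate their conditionals against vars_copy, which by
            # this point already holds the registered result, so expressions such as
            # 'failed_when: myresult.rc != 0' (with 'register: myresult') can
            # inspect the module output.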
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.put(TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs()), block=False)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = delegated_vars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, variables, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
if self._task.delegate_to is not None:
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
else:
cvars = variables
        # use magic var if it exists, if not, let task inheritance do its thing.
        self._play_context.connection = cvars.get('ansible_connection', self._task.connection)
        # TODO: play context has logic to update the connection for 'smart'
        # (default value, will choose between ssh and paramiko) and 'persistent'
        # (really paramiko), eventually this should move to the task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection = self._shared_loader_obj.connection_loader.get(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become', self._task.become):
become_plugin = self._get_become(cvars.get('ansible_become_method', self._task.become_method))
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, variables, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, variables, templar):
final_vars = combine_vars(variables, variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()))
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
else:
# FUTURE: once we're comfortable with collections impl, preface this action with ansible.builtin so it can't be hijacked
handler_name = 'normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATHS': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
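The three-step fallback in `_get_action_handler()` above — exact action plugin, then a per-platform network action, then the generic `normal` handler — is easier to follow in isolation. The sketch below is a hypothetical, self-contained re-statement of that order; `NETWORK_GROUP_MODULES` and the plugin set are stand-ins for `C.NETWORK_GROUP_MODULES` and the action loader, not the real objects.

```python
# Hedged illustration of the resolution order in _get_action_handler();
# the constant and the plugin registry here are stand-ins, not Ansible's own.
NETWORK_GROUP_MODULES = frozenset(('eos', 'nxos', 'ios', 'iosxr', 'junos', 'vyos'))

def resolve_handler_name(action, known_action_plugins):
    # Split a possibly collection-qualified action into collection + module name.
    module_collection, _sep, module_name = action.rpartition('.')
    module_prefix = module_name.split('_')[0]
    if module_collection:
        # Look for the platform action plugin in the same collection as the module.
        network_action = '{0}.{1}'.format(module_collection, module_prefix)
    else:
        network_action = module_prefix
    if action in known_action_plugins:
        return action                      # dedicated action plugin wins
    if module_prefix in NETWORK_GROUP_MODULES and network_action in known_action_plugins:
        return network_action              # one action plugin per network platform
    return 'normal'                        # generic module execution

print(resolve_handler_name('eos_config', {'eos'}))   # -> eos
print(resolve_handler_name('copy', {'template'}))    # -> normal
```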
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,238 |
ansible-test ansible-doc sanity check should also check that --json works
|
##### SUMMARY
That would have prevented #69031.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/69238
|
https://github.com/ansible/ansible/pull/69288
|
4794b98f2a9667fc6e964bcb6a36677f6de04475
|
0b82d4499e2e0076f9efd6d360d4389af0aa2921
| 2020-04-29T15:46:39Z |
python
| 2020-05-29T15:52:29Z |
changelogs/fragments/69288-ansible-test-ansible-doc-json.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,238 |
ansible-test ansible-doc sanity check should also check that --json works
|
##### SUMMARY
That would have prevented #69031.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/69238
|
https://github.com/ansible/ansible/pull/69288
|
4794b98f2a9667fc6e964bcb6a36677f6de04475
|
0b82d4499e2e0076f9efd6d360d4389af0aa2921
| 2020-04-29T15:46:39Z |
python
| 2020-05-29T15:52:29Z |
test/lib/ansible_test/_internal/sanity/ansible_doc.py
|
"""Sanity test for ansible-doc."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import os
import re
from .. import types as t
from ..sanity import (
SanitySingleVersion,
SanityFailure,
SanitySuccess,
)
from ..target import (
TestTarget,
)
from ..util import (
SubprocessError,
display,
is_subdir,
)
from ..util_common import (
intercept_command,
)
from ..ansible_util import (
ansible_environment,
)
from ..config import (
SanityConfig,
)
from ..data import (
data_context,
)
from ..coverage_util import (
coverage_context,
)
class AnsibleDocTest(SanitySingleVersion):
"""Sanity test for ansible-doc."""
def filter_targets(self, targets): # type: (t.List[TestTarget]) -> t.List[TestTarget]
"""Return the given list of test targets, filtered to include only those relevant for the test."""
# This should use documentable plugins from constants instead
plugin_type_blacklist = set([
# not supported by ansible-doc
'action',
'doc_fragments',
'filter',
'module_utils',
'netconf',
'terminal',
'test',
])
plugin_paths = [plugin_path for plugin_type, plugin_path in data_context().content.plugin_paths.items() if plugin_type not in plugin_type_blacklist]
return [target for target in targets
if os.path.splitext(target.path)[1] == '.py'
and os.path.basename(target.path) != '__init__.py'
and any(is_subdir(target.path, path) for path in plugin_paths)
]
def test(self, args, targets, python_version):
"""
:type args: SanityConfig
:type targets: SanityTargets
:type python_version: str
:rtype: TestResult
"""
settings = self.load_processor(args)
paths = [target.path for target in targets.include]
doc_targets = collections.defaultdict(list)
target_paths = collections.defaultdict(dict)
remap_types = dict(
modules='module',
)
for plugin_type, plugin_path in data_context().content.plugin_paths.items():
plugin_type = remap_types.get(plugin_type, plugin_type)
for plugin_file_path in [target.name for target in targets.include if is_subdir(target.path, plugin_path)]:
plugin_name = os.path.splitext(os.path.basename(plugin_file_path))[0]
if plugin_name.startswith('_'):
plugin_name = plugin_name[1:]
doc_targets[plugin_type].append(data_context().content.prefix + plugin_name)
target_paths[plugin_type][data_context().content.prefix + plugin_name] = plugin_file_path
env = ansible_environment(args, color=False)
error_messages = []
for doc_type in sorted(doc_targets):
cmd = ['ansible-doc', '-t', doc_type] + sorted(doc_targets[doc_type])
try:
with coverage_context(args):
stdout, stderr = intercept_command(args, cmd, target_name='ansible-doc', env=env, capture=True, python_version=python_version)
status = 0
except SubprocessError as ex:
stdout = ex.stdout
stderr = ex.stderr
status = ex.status
if status:
summary = u'%s' % SubprocessError(cmd=cmd, status=status, stderr=stderr)
return SanityFailure(self.name, summary=summary)
if stdout:
display.info(stdout.strip(), verbosity=3)
if stderr:
# ignore removed module/plugin warnings
stderr = re.sub(r'\[WARNING\]: [^ ]+ [^ ]+ has been removed\n', '', stderr).strip()
if stderr:
summary = u'Output on stderr from ansible-doc is considered an error.\n\n%s' % SubprocessError(cmd, stderr=stderr)
return SanityFailure(self.name, summary=summary)
if args.explain:
return SanitySuccess(self.name)
error_messages = settings.process_errors(error_messages, paths)
if error_messages:
return SanityFailure(self.name, messages=error_messages)
return SanitySuccess(self.name)
|
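Given the feature request above (issue 69238), one plausible extension of `AnsibleDocTest.test()` is to run each `ansible-doc` invocation twice, once with `--json`. This is a minimal sketch of the idea, reusing the helpers already imported in the file; it is not the actual change from PR #69288.

```python
# Hypothetical variant of the command loop inside AnsibleDocTest.test();
# assumes args, env, python_version and doc_targets are in scope as above.
for doc_type in sorted(doc_targets):
    for extra_args in ([], ['--json']):
        cmd = ['ansible-doc', '-t', doc_type] + extra_args + sorted(doc_targets[doc_type])
        try:
            with coverage_context(args):
                stdout, stderr = intercept_command(args, cmd, target_name='ansible-doc',
                                                   env=env, capture=True,
                                                   python_version=python_version)
            status = 0
        except SubprocessError as ex:
            stdout, stderr, status = ex.stdout, ex.stderr, ex.status
        if status:
            return SanityFailure(self.name, summary=u'%s' % SubprocessError(cmd=cmd, status=status, stderr=stderr))
        # ... stderr filtering and success handling as in the original loop ...
```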
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,617 |
Ansible does not work when the working directory contains some non-ASCII characters
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As of Ansible 2.9, the Ansible CLI commands crash when the Ansible project directory contains some non-ASCII characters. In my case it contains a `é`, a fairly common character in French.
I get this error:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> to see the full traceback, use -vvv
With `-vvv`, I get this:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> the full traceback was:
>
> Traceback (most recent call last):
> File "/usr/bin/ansible-playbook", line 123, in <module>
> exit_code = cli.run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 69, in run
> super(PlaybookCLI, self).run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 82, in run
> display.vv(to_text(opt_help.version(self.parser.prog)))
> File "/usr/lib/python2.7/dist-packages/ansible/cli/arguments/option_helpers.py", line 174, in version
> result += "\n config file = %s" % C.CONFIG_FILE
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
If I remove this character from the directory path, it works again.
Note that this also happens for other Ansible commands, such as `ansible --version`. It sounds like it happens as soon as the project configuration is read.
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Ansible.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.3
config file = /tmp/project/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/tmp/project/ansible.cfg) = True
DEFAULT_HOST_LIST(/tmp/project/ansible.cfg) = [u'/tmp/project/hosts']
DEFAULT_REMOTE_USER(/tmp/project/ansible.cfg) = username
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04. Ansible installed from the [Ansible PPA](https://launchpad.net/~ansible).
##### STEPS TO REPRODUCE
Create any ansible project in a directory containing a non-ASCII character.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible should work, as it did before Ansible 2.9.
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
Get the error reported above.
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/66617
|
https://github.com/ansible/ansible/pull/66624
|
1dd8247fba3fc5e07b65e28bc4b45ddbaa9a93ba
|
3606dcfe652ab45a8c7e4dedd5e2a64edd820ef5
| 2020-01-20T08:32:13Z |
python
| 2020-05-29T18:42:44Z |
changelogs/fragments/66617-version-unicode-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,617 |
Ansible does not work when the working directory contains some non-ASCII characters
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As of Ansible 2.9, the Ansible CLI commands crash when the Ansible project directory contains some non-ASCII characters. In my case it contains a `é`, a fairly common character in French.
I get this error:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> to see the full traceback, use -vvv
With `-vvv`, I get this:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> the full traceback was:
>
> Traceback (most recent call last):
> File "/usr/bin/ansible-playbook", line 123, in <module>
> exit_code = cli.run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 69, in run
> super(PlaybookCLI, self).run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 82, in run
> display.vv(to_text(opt_help.version(self.parser.prog)))
> File "/usr/lib/python2.7/dist-packages/ansible/cli/arguments/option_helpers.py", line 174, in version
> result += "\n config file = %s" % C.CONFIG_FILE
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
If I remove this character from the directory path, it works again.
Note that this also happens for other Ansible commands, such as `ansible --version`. It sounds like it happens as soon as the project configuration is read.
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Ansible.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.3
config file = /tmp/project/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/tmp/project/ansible.cfg) = True
DEFAULT_HOST_LIST(/tmp/project/ansible.cfg) = [u'/tmp/project/hosts']
DEFAULT_REMOTE_USER(/tmp/project/ansible.cfg) = username
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04. Ansible installed from the [Ansible PPA](https://launchpad.net/~ansible).
##### STEPS TO REPRODUCE
Create any ansible project in a directory containing a non-ASCII character.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible should work, as it did before Ansible 2.9.
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
Get the error reported above.
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/66617
|
https://github.com/ansible/ansible/pull/66624
|
1dd8247fba3fc5e07b65e28bc4b45ddbaa9a93ba
|
3606dcfe652ab45a8c7e4dedd5e2a64edd820ef5
| 2020-01-20T08:32:13Z |
python
| 2020-05-29T18:42:44Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueType error
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueType error
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tilde's in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
tildes's in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(cwd_cfg)
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
try:
self.update_module_defaults_groups()
except Exception as e:
# Since this is a 2.7 preview feature, we want to have it fail as gracefully as possible when there are issues.
sys.stderr.write('Could not load module_defaults_groups: %s: %s\n\n' % (type(e).__name__, e))
self.module_defaults_groups = {}
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
self._parsers[cfile] = configparser.ConfigParser()
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in defs[config].get('aliases', []) if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys and config in keys:
value, origin = keys[config], 'keyword'
origin = 'keyword: %s' % origin
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_module_defaults_groups(self):
defaults_config = self._read_config_yaml_file(
'%s/module_defaults.yml' % os.path.join(os.path.dirname(__file__))
)
if defaults_config.get('version') not in ('1', '1.0', 1, 1.0):
raise AnsibleError('module_defaults.yml has an invalid version "%s" for configuration. Could be a bad install.' % defaults_config.get('version'))
self.module_defaults_groups = defaults_config.get('groupings', {})
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
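The traceback in issue 66617 comes down to mixing a bytes path (as returned by `os.getcwd()` on Python 2) into a text format string, which triggers an implicit ASCII decode. Below is a minimal, hedged reproduction of that failure mode together with the usual `to_text` remedy; the literal path is invented for illustration.

```python
# -*- coding: utf-8 -*-
# Illustration only; the bytes path is made up. On Python 2,
# u"config file = %s" % b_path decodes b_path with the ascii codec and
# raises UnicodeDecodeError on the 0xc3 byte of the UTF-8 'é'.
from ansible.module_utils._text import to_text

b_path = b'/tmp/projet-caf\xc3\xa9/ansible.cfg'

# Safe on both Python 2 and Python 3: decode explicitly first.
print(u"config file = %s" % to_text(b_path, errors='surrogate_or_strict'))
```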
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,617 |
Ansible does not work when the working directory contains some non-ASCII characters
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As of Ansible 2.9, the Ansible CLI commands crash when the Ansible project directory contains some non-ASCII characters. In my case it contains a `é`, a fairly common character in French.
I get this error:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> to see the full traceback, use -vvv
With `-vvv`, I get this:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> the full traceback was:
>
> Traceback (most recent call last):
> File "/usr/bin/ansible-playbook", line 123, in <module>
> exit_code = cli.run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 69, in run
> super(PlaybookCLI, self).run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 82, in run
> display.vv(to_text(opt_help.version(self.parser.prog)))
> File "/usr/lib/python2.7/dist-packages/ansible/cli/arguments/option_helpers.py", line 174, in version
> result += "\n config file = %s" % C.CONFIG_FILE
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
If I remove this character from the directory path, it works again.
Note that this also happens for other Ansible commands, such as `ansible --version`. It sounds like it happens as soon as the project configuration is read.
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Ansible.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.3
config file = /tmp/project/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/tmp/project/ansible.cfg) = True
DEFAULT_HOST_LIST(/tmp/project/ansible.cfg) = [u'/tmp/project/hosts']
DEFAULT_REMOTE_USER(/tmp/project/ansible.cfg) = username
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04. Ansible installed from the [Ansible PPA](https://launchpad.net/~ansible).
##### STEPS TO REPRODUCE
Create any ansible project in a directory containing a non-ASCII character.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible should work, as it did before Ansible 2.9.
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
Get the error reported above.
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/66617
|
https://github.com/ansible/ansible/pull/66624
|
1dd8247fba3fc5e07b65e28bc4b45ddbaa9a93ba
|
3606dcfe652ab45a8c7e4dedd5e2a64edd820ef5
| 2020-01-20T08:32:13Z |
python
| 2020-05-29T18:42:44Z |
test/integration/targets/unicode/křížek-ansible-project/ansible.cfg
| |
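`find_ini_config_file()` in the manager above resolves the config file in a fixed order: `ANSIBLE_CONFIG`, then a non-world-writable current working directory, then `~/.ansible.cfg`, then `/etc/ansible/ansible.cfg`. A toy re-statement of that precedence, hedged as a sketch — the candidate list mirrors the code above, but this is not the function itself:

```python
# Hedged restatement of the lookup order in find_ini_config_file().
import os

def candidate_config_paths(env=os.environ):
    paths = []
    if 'ANSIBLE_CONFIG' in env:
        p = os.path.expanduser(env['ANSIBLE_CONFIG'])
        # A directory value means "ansible.cfg inside that directory".
        paths.append(os.path.join(p, 'ansible.cfg') if os.path.isdir(p) else p)
    if not os.stat(os.getcwd()).st_mode & 0o002:   # skip a world-writable cwd
        paths.append(os.path.join(os.getcwd(), 'ansible.cfg'))
    paths.append(os.path.expanduser('~/.ansible.cfg'))
    paths.append('/etc/ansible/ansible.cfg')
    return paths

# First existing, readable candidate wins:
path = next((p for p in candidate_config_paths()
             if os.path.exists(p) and os.access(p, os.R_OK)), None)
print(path)
```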
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,617 |
Ansible does not work when the working directory contains some non-ASCII characters
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As of Ansible 2.9, the Ansible CLI commands crash when the Ansible project directory contains some non-ASCII characters. In my case it contains a `é`, a fairly common character in French.
I get this error:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> to see the full traceback, use -vvv
With `-vvv`, I get this:
> ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
> the full traceback was:
>
> Traceback (most recent call last):
> File "/usr/bin/ansible-playbook", line 123, in <module>
> exit_code = cli.run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 69, in run
> super(PlaybookCLI, self).run()
> File "/usr/lib/python2.7/dist-packages/ansible/cli/__init__.py", line 82, in run
> display.vv(to_text(opt_help.version(self.parser.prog)))
> File "/usr/lib/python2.7/dist-packages/ansible/cli/arguments/option_helpers.py", line 174, in version
> result += "\n config file = %s" % C.CONFIG_FILE
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 31: ordinal not in range(128)
If I remove this character from the directory path, it works again.
Note that this also happens for other Ansible commands, such as `ansible --version`. It sounds like it happens as soon as the project configuration is read.
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Ansible.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.3
config file = /tmp/project/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/tmp/project/ansible.cfg) = True
DEFAULT_HOST_LIST(/tmp/project/ansible.cfg) = [u'/tmp/project/hosts']
DEFAULT_REMOTE_USER(/tmp/project/ansible.cfg) = username
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04. Ansible installed from the [Ansible PPA](https://launchpad.net/~ansible).
##### STEPS TO REPRODUCE
Create any ansible project in a directory containing a non-ASCII character.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible should work, as it did before Ansible 2.9.
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
Get the error reported above.
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/66617
|
https://github.com/ansible/ansible/pull/66624
|
1dd8247fba3fc5e07b65e28bc4b45ddbaa9a93ba
|
3606dcfe652ab45a8c7e4dedd5e2a64edd820ef5
| 2020-01-20T08:32:13Z |
python
| 2020-05-29T18:42:44Z |
test/integration/targets/unicode/runme.sh
|
#!/usr/bin/env bash
set -eux
ansible-playbook unicode.yml -i inventory -v -e 'extra_var=café' "$@"
# Test the start-at-task flag #9571
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook unicode.yml -i inventory -v --start-at-task '*¶' -e 'start_at_task=True' "$@"
|
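The same regression the `runme.sh` above guards against can be smoke-tested outside the integration suite. Below is a hypothetical helper (Python 3.7+, stdlib only, and it assumes `ansible` is on `PATH`) that runs an Ansible CLI command from a directory whose name contains non-ASCII characters:

```python
# Hypothetical smoke test for the issue 66617 regression.
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp(suffix='-café')
try:
    completed = subprocess.run(['ansible', '--version'], cwd=workdir,
                               capture_output=True, text=True)
    # Before the fix for issue 66617 this crashed with UnicodeDecodeError.
    assert completed.returncode == 0, completed.stderr
    print(completed.stdout.splitlines()[0])
finally:
    os.rmdir(workdir)
```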
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,320 |
Ansible task with become !root fails with py3 (py2 is fine)
|
#### SUMMARY
Ansible task with become !root fails with py3 (py2 is fine).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/danj/git/git.chown.me/ansible/ansible.cfg
configured module search path = ['/home/danj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/danj/venv/ansible/lib/python3.8/site-packages/ansible
executable location = /home/danj/venv/ansible/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = True
CACHE_PLUGIN(/home/danj/git/git.chown.me/ansible/ansible.cfg) = memory
DEFAULT_ACTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/action_plugins']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/callback_plugins']
DEFAULT_CONNECTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/connection_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/filter_plugins']
DEFAULT_GATHERING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/home/danj/git/git.chown.me/ansible/hosts']
DEFAULT_LOOKUP_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/lookup_plugins']
DEFAULT_MANAGED_STR(/home/danj/git/git.chown.me/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_TIMEOUT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_VARS_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/vars_plugins']
RETRY_FILES_ENABLED(/home/danj/git/git.chown.me/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ansible "client" is ubuntu 20.04, target is OpenBSD -current.
##### STEPS TO REPRODUCE
```yaml
- name: add anoncvs.fr.obsd.org to the known hosts
become_user: danj
become: "yes"
become_method: sudo
lineinfile:
dest: /home/danj/.ssh/known_hosts
line: "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 [...]"
create: "yes"
tags:
- cvs
- tmpcvs
```
Most tasks run as root since privileged access is required, but sometimes I use another user with become/sudo and it fails regardless of the module (lineinfile as here, shell, etc.). The problem is with become.
##### EXPECTED RESULTS
It works with python2 (i.e. the target has ansible_python_interpreter=/usr/local/bin/python2)
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-myjuvnfghfjvpjggsaqefxmykdpxfqqb ; /usr/local/bin/python2'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (0, b'\n{"msg": "", "diff": [{"after": "removed for privacy reason", "src": null, "seuser": null, "delimiter": null, "mode": null, "firstmatch": false, "attributes": null, "backup": false}}}\n', b'')
ok: [virtie-root] => {
"backup": "",
"changed": false,
"diff": [
{
"after": "[removed for privacy reasons]",
"after_header": "/home/danj/.ssh/known_hosts (content)",
"before": "[removed for privacy reasons]",
"before_header": "/home/danj/.ssh/known_hosts (content)"
},
{
"after_header": "/home/danj/.ssh/known_hosts (file attributes)",
"before_header": "/home/danj/.ssh/known_hosts (file attributes)"
}
],
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": false,
"content": null,
"create": true,
"delimiter": null,
"dest": "/home/danj/.ssh/known_hosts",
"directory_mode": null,
"firstmatch": false,
"follow": false,
"force": null,
"group": null,
"insertafter": null,
"insertbefore": null,
"line": "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIT93hmb9QFu8r8ZxbGk6xXKptPdFwg2xM0ClkQWqKuSXBPPDo6FSOdtUlfzJwaaWBnp+L+6SKJJZqLjepbfNyQ=",
"mode": null,
"owner": null,
"path": "/home/danj/.ssh/known_hosts",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null,
"validate": null
}
},
"msg": ""
}
```
##### ACTUAL RESULTS
With -vvv (adding more verbosity does not pretty-print the stack trace, so it is no more readable):
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ybempfbcktyjiqpnfzkflihnyxlyekjq ; /usr/local/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (1, b'', b'Traceback (most recent call last):\n File "<stdin>", line 102, in <module>\n File "<stdin>", line 17, in _ansiballz_main\n File "<frozen importlib._bootstrap>", line 983, in _find_and_load\n File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked\n File "<frozen importlib._bootstrap>", line 906, in _find_spec\n File "<frozen importlib._bootstrap_external>", line 1280, in find_spec\n File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec\n File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n')
<virtie-root> Failed to connect to the host via ssh: Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
fatal: [virtie-root]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 17, in _ansiballz_main\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 963, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 906, in _find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1280, in find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1249, in _get_spec\n File \"<frozen importlib._bootstrap_external>\", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
Thanks!
|
https://github.com/ansible/ansible/issues/69320
|
https://github.com/ansible/ansible/pull/69342
|
79ab7984272120a5444bee9a0a1ea6e799789696
|
2abaf320d746c8680a0ce595ad0de93639c7e539
| 2020-05-04T19:48:27Z |
python
| 2020-06-01T08:43:20Z |
changelogs/fragments/69320-sys-path-cwd.yml
| |
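The `PermissionError` in issue 69320 is raised while Python 3 caches a path importer for the current working directory: after `sudo`-ing from root to an unprivileged user, the inherited cwd (root's home) is unreadable, and the `''` entry on `sys.path` points at it. The sketch below illustrates the general mitigation — drop unreadable cwd entries from `sys.path` — as an idea behind the eventual fix (PR #69342), not the actual patch that landed:

```python
# Illustration of the idea only, not the change that landed in ansible.
import os
import sys

def _readable(path):
    # '' on sys.path means the current working directory.
    return os.access(path or os.getcwd(), os.R_OK | os.X_OK)

sys.path = [p for p in sys.path if p not in ('', os.getcwd()) or _readable(p)]
```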
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,320 |
Ansible task with become !root fails with py3 (py2 is fine)
|
#### SUMMARY
Ansible task with become !root fails with py3 (py2 is fine).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/danj/git/git.chown.me/ansible/ansible.cfg
configured module search path = ['/home/danj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/danj/venv/ansible/lib/python3.8/site-packages/ansible
executable location = /home/danj/venv/ansible/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = True
CACHE_PLUGIN(/home/danj/git/git.chown.me/ansible/ansible.cfg) = memory
DEFAULT_ACTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/action_plugins']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/callback_plugins']
DEFAULT_CONNECTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/connection_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/filter_plugins']
DEFAULT_GATHERING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/home/danj/git/git.chown.me/ansible/hosts']
DEFAULT_LOOKUP_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/lookup_plugins']
DEFAULT_MANAGED_STR(/home/danj/git/git.chown.me/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_TIMEOUT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_VARS_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/vars_plugins']
RETRY_FILES_ENABLED(/home/danj/git/git.chown.me/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ansible "client" is ubuntu 20.04, target is OpenBSD -current.
##### STEPS TO REPRODUCE
```yaml
- name: add anoncvs.fr.obsd.org to the known hosts
become_user: danj
become: "yes"
become_method: sudo
lineinfile:
dest: /home/danj/.ssh/known_hosts
line: "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 [...]"
create: "yes"
tags:
- cvs
- tmpcvs
```
Most tasks run as root since privileged access is required, but sometimes I use another user with become/sudo and it fails regardless of the module (lineinfile as here, shell, etc.). The problem is with become.
##### EXPECTED RESULTS
It works with python2 (i.e. the target has ansible_python_interpreter=/usr/local/bin/python2)
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-myjuvnfghfjvpjggsaqefxmykdpxfqqb ; /usr/local/bin/python2'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (0, b'\n{"msg": "", "diff": [{"after": "removed for privacy reason", "src": null, "seuser": null, "delimiter": null, "mode": null, "firstmatch": false, "attributes": null, "backup": false}}}\n', b'')
ok: [virtie-root] => {
"backup": "",
"changed": false,
"diff": [
{
"after": "[removed for privacy reasons]",
"after_header": "/home/danj/.ssh/known_hosts (content)",
"before": "[removed for privacy reasons]",
"before_header": "/home/danj/.ssh/known_hosts (content)"
},
{
"after_header": "/home/danj/.ssh/known_hosts (file attributes)",
"before_header": "/home/danj/.ssh/known_hosts (file attributes)"
}
],
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": false,
"content": null,
"create": true,
"delimiter": null,
"dest": "/home/danj/.ssh/known_hosts",
"directory_mode": null,
"firstmatch": false,
"follow": false,
"force": null,
"group": null,
"insertafter": null,
"insertbefore": null,
"line": "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIT93hmb9QFu8r8ZxbGk6xXKptPdFwg2xM0ClkQWqKuSXBPPDo6FSOdtUlfzJwaaWBnp+L+6SKJJZqLjepbfNyQ=",
"mode": null,
"owner": null,
"path": "/home/danj/.ssh/known_hosts",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null,
"validate": null
}
},
"msg": ""
}
```
##### ACTUAL RESULTS
With -vvv (adding more verbosity does not pretty-print the stack trace, so it is about as readable as you would expect, which is to say not very):
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ybempfbcktyjiqpnfzkflihnyxlyekjq ; /usr/local/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (1, b'', b'Traceback (most recent call last):\n File "<stdin>", line 102, in <module>\n File "<stdin>", line 17, in _ansiballz_main\n File "<frozen importlib._bootstrap>", line 983, in _find_and_load\n File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked\n File "<frozen importlib._bootstrap>", line 906, in _find_spec\n File "<frozen importlib._bootstrap_external>", line 1280, in find_spec\n File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec\n File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n')
<virtie-root> Failed to connect to the host via ssh: Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
fatal: [virtie-root]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 17, in _ansiballz_main\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 963, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 906, in _find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1280, in find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1249, in _get_spec\n File \"<frozen importlib._bootstrap_external>\", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
Thanks!
|
https://github.com/ansible/ansible/issues/69320
|
https://github.com/ansible/ansible/pull/69342
|
79ab7984272120a5444bee9a0a1ea6e799789696
|
2abaf320d746c8680a0ce595ad0de93639c7e539
| 2020-05-04T19:48:27Z |
python
| 2020-06-01T08:43:20Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
.. warning::
Links on this page may not point to the most recent versions of modules. In preparation for the release of 2.10, many plugins and modules have migrated to Collections on `Ansible Galaxy <https://galaxy.ansible.com>`_. For the current development status of Collections and FAQ see `Ansible Collections Community Guide <https://github.com/ansible-collections/general/blob/master/README.rst>`_. We expect the 2.10 Porting Guide to change frequently up to the 2.10 release. Follow the conversations about collections on our various :ref:`communication` channels for the latest information on the status of the ``devel`` branch.
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
* Fixed a bug with boolean keywords that made arbitrary strings evaluate to 'False'; they now raise an error if the value is not a proper boolean.
For example, `diff: yes-` used to evaluate to `False`. A short sketch of the stricter coercion follows this list.
* A new fact, ``ansible_processor_nproc`` reflects the number of vcpus
available to processes (falls back to the number of vcpus available to
the scheduler).
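You can preview the stricter coercion with the helper Ansible itself uses for booleans (a minimal sketch; assumes ``ansible-base`` is importable on the controller):

.. code-block:: python

    from ansible.module_utils.parsing.convert_bool import boolean

    print(boolean('yes'))      # True
    try:
        boolean('yes-')        # strings like this previously slipped through as False
    except TypeError as err:
        print('rejected:', err)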
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md55`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
.. warning::
Links on this page may not point to the most recent versions of modules. We will update them when we can.
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use ldap_attrs instead.
* vyos_static_route use vyos_static_routes instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_tag <ec2_tag_module>`: Support for ``list`` as a state has been deprecated. The ``ec2_tag_info`` module can be used to fetch the tags on an EC2 resource.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
* :ref:`win_package <win_package_module>`: the ``username`` and ``password`` options will be removed. The same functionality can be done by using ``become: yes`` and ``become_flags: logon_type=new_credentials logon_flags=netcredentials_only`` on the task.
* :ref:`win_package <win_package_module>`: the ``ensure`` alias for the ``state`` option will be removed. Please use ``state`` instead of ``ensure``.
* :ref:`win_package <win_package_module>`: the ``productid`` alias for the ``product_id`` option will be removed. Please use ``product_id`` instead of ``productid``.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use vmware_host_dns instead.
Noteworthy module changes
-------------------------
* The ``datacenter`` option has been removed from :ref:`vmware_guest_find <vmware_guest_find_module>`
* The options ``ip_address`` and ``subnet_mask`` have been removed from :ref:`vmware_vmkernel <vmware_vmkernel_module>`; use the suboptions ``ip_address`` and ``subnet_mask`` of the ``network`` option instead.
* Ansible modules created with ``add_file_common_args=True`` added a number of undocumented arguments which were mostly there to ease implementing certain action plugins. The undocumented arguments ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode`` are no longer added. Modules that relied on these arguments being added automatically must now specify them explicitly.
* The ``AWSRetry`` decorator no longer catches ``NotFound`` exceptions by default. ``NotFound`` exceptions need to be explicitly added using ``catch_extra_error_codes``. Some AWS modules may see an increase in transient failures due to AWS's eventual consistency model.
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module no longer requires the VM name, which was a required parameter in releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the specified directory, since that could execute potentially unknown scripts. It now follows Pester's built-in default of only running tests for files named like ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
* :ref:`nxos_igmp_interface <nxos_igmp_interface_module>` no longer supports the deprecated ``oif_prefix`` and ``oif_source`` options. These have been superseded by ``oif_ps``.
* :ref:`aws_s3 <aws_s3_module>` can now delete versioned buckets even when they are not empty - set ``mode: delete`` to delete a versioned bucket and everything in it.
* The parameter ``message`` in the :ref:`grafana_dashboard <grafana_dashboard_module>` module is renamed to ``commit_message`` since ``message`` is used internally by the Ansible core engine.
* The parameter ``message`` in the :ref:`datadog_monitor <datadog_monitor_module>` module is renamed to ``notification_message`` since ``message`` is used internally by the Ansible core engine.
* The parameter ``message`` in the :ref:`bigpanda <bigpanda_module>` module is renamed to ``deployment_message`` since ``message`` is used internally by the Ansible core engine.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10`` lookup plugin names passed in as an argument to the ``lookup()`` function were treated as case-insensitive as opposed to lookups invoked via ``with_<lookup_name>``. ``2.10`` brings consistency to ``lookup()`` and ``with_`` to be both case-sensitive.
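The change can be illustrated with a small sketch (not Ansible source; it only mimics the name-resolution difference):

.. code-block:: python

    def resolve_lookup(name, available=('file', 'env')):
        # pre-2.10 (hypothetical): lookup() lowercased the name first,
        # i.e. `return name.lower() in available`
        # 2.10: the name must match exactly, as with ``with_<lookup_name>``
        return name in available

    assert resolve_lookup('file')         # still found
    assert not resolve_lookup('File')     # no longer found via lookup() in 2.10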
Noteworthy plugin changes
-------------------------
* Cache plugins in collections can be used to cache data from inventory plugins. Previously, cache plugins in collections could only be used for fact caching.
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
* Some undocumented arguments from ``FILE_COMMON_ARGUMENTS`` have been removed; plugins using these, in particular action plugins, need to be adjusted. The undocumented arguments which were removed are ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode``.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,320 |
Ansible task with become !root fails with py3 (py2 is fine)
|
#### SUMMARY
Ansible task with become !root fails with py3 (py2 is fine).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
become
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/danj/git/git.chown.me/ansible/ansible.cfg
configured module search path = ['/home/danj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/danj/venv/ansible/lib/python3.8/site-packages/ansible
executable location = /home/danj/venv/ansible/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = True
CACHE_PLUGIN(/home/danj/git/git.chown.me/ansible/ansible.cfg) = memory
DEFAULT_ACTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/action_plugins']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/callback_plugins']
DEFAULT_CONNECTION_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/connection_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/filter_plugins']
DEFAULT_GATHERING(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/home/danj/git/git.chown.me/ansible/hosts']
DEFAULT_LOOKUP_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/lookup_plugins']
DEFAULT_MANAGED_STR(/home/danj/git/git.chown.me/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_TIMEOUT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/danj/git/git.chown.me/ansible/ansible.cfg) = smart
DEFAULT_VARS_PLUGIN_PATH(/home/danj/git/git.chown.me/ansible/ansible.cfg) = ['/usr/local/share/ansible_plugins/vars_plugins']
RETRY_FILES_ENABLED(/home/danj/git/git.chown.me/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ansible "client" is ubuntu 20.04, target is OpenBSD -current.
##### STEPS TO REPRODUCE
```yaml
- name: add anoncvs.fr.obsd.org to the known hosts
become_user: danj
become: "yes"
become_method: sudo
lineinfile:
dest: /home/danj/.ssh/known_hosts
line: "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 [...]"
create: "yes"
tags:
- cvs
- tmpcvs
```
Most tasks run as root since privileged access is required, but sometimes I use another user with become/sudo and it fails regardless of the module (lineinfile as here, shell, and so on). The problem is with become.
##### EXPECTED RESULTS
It works with python2 (i.e. the target has ansible_python_interpreter=/usr/local/bin/python2)
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-myjuvnfghfjvpjggsaqefxmykdpxfqqb ; /usr/local/bin/python2'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (0, b'\n{"msg": "", "diff": [{"after": "removed for privacy reason", "src": null, "seuser": null, "delimiter": null, "mode": null, "firstmatch": false, "attributes": null, "backup": false}}}\n', b'')
ok: [virtie-root] => {
"backup": "",
"changed": false,
"diff": [
{
"after": "[removed for privacy reasons]",
"after_header": "/home/danj/.ssh/known_hosts (content)",
"before": "[removed for privacy reasons]",
"before_header": "/home/danj/.ssh/known_hosts (content)"
},
{
"after_header": "/home/danj/.ssh/known_hosts (file attributes)",
"before_header": "/home/danj/.ssh/known_hosts (file attributes)"
}
],
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": false,
"content": null,
"create": true,
"delimiter": null,
"dest": "/home/danj/.ssh/known_hosts",
"directory_mode": null,
"firstmatch": false,
"follow": false,
"force": null,
"group": null,
"insertafter": null,
"insertbefore": null,
"line": "anoncvs.fr.openbsd.org,145.238.209.46 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIT93hmb9QFu8r8ZxbGk6xXKptPdFwg2xM0ClkQWqKuSXBPPDo6FSOdtUlfzJwaaWBnp+L+6SKJJZqLjepbfNyQ=",
"mode": null,
"owner": null,
"path": "/home/danj/.ssh/known_hosts",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null,
"validate": null
}
},
"msg": ""
}
```
##### ACTUAL RESULTS
With -vvv (adding more verbosity does not pretty-print the stack trace, so it is about as readable as you would expect, which is to say not very):
```
TASK [cvs : add anoncvs.fr.obsd.org to the known hosts] ***************************************************************************************************************************************
task path: /home/danj/git/git.chown.me/ansible/roles/cvs/tasks/main.yml:62
Using module file /home/danj/venv/ansible/lib/python3.8/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<virtie-root> ESTABLISH SSH CONNECTION FOR USER: root
<virtie-root> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/danj/.ansible/cp/1f07b313ca virtie-root '/bin/sh -c '"'"'sudo -H -S -n -u danj /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ybempfbcktyjiqpnfzkflihnyxlyekjq ; /usr/local/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<virtie-root> (1, b'', b'Traceback (most recent call last):\n File "<stdin>", line 102, in <module>\n File "<stdin>", line 17, in _ansiballz_main\n File "<frozen importlib._bootstrap>", line 983, in _find_and_load\n File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked\n File "<frozen importlib._bootstrap>", line 906, in _find_spec\n File "<frozen importlib._bootstrap_external>", line 1280, in find_spec\n File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec\n File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n')
<virtie-root> Failed to connect to the host via ssh: Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 17, in _ansiballz_main
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1249, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1213, in _path_importer_cache
PermissionError: [Errno 13] Permission denied
fatal: [virtie-root]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 17, in _ansiballz_main\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 963, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 906, in _find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1280, in find_spec\n File \"<frozen importlib._bootstrap_external>\", line 1249, in _get_spec\n File \"<frozen importlib._bootstrap_external>\", line 1213, in _path_importer_cache\nPermissionError: [Errno 13] Permission denied\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
Thanks!
|
https://github.com/ansible/ansible/issues/69320
|
https://github.com/ansible/ansible/pull/69342
|
79ab7984272120a5444bee9a0a1ea6e799789696
|
2abaf320d746c8680a0ce595ad0de93639c7e539
| 2020-05-04T19:48:27Z |
python
| 2020-06-01T08:43:20Z |
lib/ansible/executor/module_common.py
|
# (c) 2013-2014, Michael DeHaan <[email protected]>
# (c) 2015 Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import base64
import datetime
import json
import os
import shlex
import zipfile
import re
import pkgutil
from io import BytesIO
from ansible.release import __version__, __author__
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.executor.interpreter_discovery import InterpreterDiscoveryRequiredError
from ansible.executor.powershell import module_manifest as ps_manifest
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
from ansible.plugins.loader import module_utils_loader
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
# Must import strategy and use write_locks from there
# If we import write_locks directly then we end up binding a
# variable to the object and then it never gets updated.
from ansible.executor import action_write_locks
from ansible.utils.display import Display
try:
import importlib.util
import importlib.machinery
imp = None
except ImportError:
import imp
# if we're on a Python that doesn't have FNFError, redefine it as IOError (since that's what we'll see)
try:
FileNotFoundError
except NameError:
FileNotFoundError = IOError
display = Display()
REPLACER = b"#<<INCLUDE_ANSIBLE_MODULE_COMMON>>"
REPLACER_VERSION = b"\"<<ANSIBLE_VERSION>>\""
REPLACER_COMPLEX = b"\"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>\""
REPLACER_WINDOWS = b"# POWERSHELL_COMMON"
REPLACER_JSONARGS = b"<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
REPLACER_SELINUX = b"<<SELINUX_SPECIAL_FILESYSTEMS>>"
# We could end up writing out parameters with unicode characters so we need to
# specify an encoding for the python source file
ENCODING_STRING = u'# -*- coding: utf-8 -*-'
b_ENCODING_STRING = b'# -*- coding: utf-8 -*-'
# module_common is relative to module_utils, so fix the path
_MODULE_UTILS_PATH = os.path.join(os.path.dirname(__file__), '..', 'module_utils')
# ******************************************************************************
ANSIBALLZ_TEMPLATE = u'''%(shebang)s
%(coding)s
_ANSIBALLZ_WRAPPER = True # For test-module.py script to tell this is a ANSIBALLZ_WRAPPER
# This code is part of Ansible, but is an independent component.
# The code in this particular templatable string, and this templatable string
# only, is BSD licensed. Modules which end up using this snippet, which is
# dynamically combined together by Ansible still belong to the author of the
# module, and they may assign their own license to the complete work.
#
# Copyright (c), James Cammarata, 2016
# Copyright (c), Toshio Kuratomi, 2016
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def _ansiballz_main():
%(rlimit)s
import os
import os.path
import sys
import __main__
# For some distros and python versions we pick up this script in the temporary
# directory. This leads to problems when the ansible module masks a python
# library that another import needs. We have not figured out what about the
# specific distros and python versions causes this to behave differently.
#
# Tested distros:
# Fedora23 with python3.4 Works
# Ubuntu15.10 with python2.7 Works
# Ubuntu15.10 with python3.4 Fails without this
# Ubuntu16.04.1 with python3.5 Fails without this
# To test on another platform:
# * use the copy module (since this shadows the stdlib copy module)
# * Turn off pipelining
# * Make sure that the destination file does not exist
# * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m'
# This will traceback in shutil. Looking at the complete traceback will show
# that shutil is importing copy which finds the ansible module instead of the
# stdlib module
scriptdir = None
try:
scriptdir = os.path.dirname(os.path.realpath(__main__.__file__))
except (AttributeError, OSError):
# Some platforms don't set __file__ when reading from stdin
# OSX raises OSError if using abspath() in a directory we don't have
# permission to read (realpath calls abspath)
pass
if scriptdir is not None:
sys.path = [p for p in sys.path if p != scriptdir]
import base64
import runpy
import shutil
import tempfile
import zipfile
if sys.version_info < (3,):
PY3 = False
else:
PY3 = True
ZIPDATA = """%(zipdata)s"""
# Note: temp_path isn't needed once we switch to zipimport
def invoke_module(modlib_path, temp_path, json_params):
# When installed via setuptools (including python setup.py install),
# ansible may be installed with an easy-install.pth file. That file
# may load the system-wide install of ansible rather than the one in
# the module. sitecustomize is the only way to override that setting.
z = zipfile.ZipFile(modlib_path, mode='a')
# py3: modlib_path will be text, py2: it's bytes. Need bytes at the end
sitecustomize = u'import sys\\nsys.path.insert(0,"%%s")\\n' %% modlib_path
sitecustomize = sitecustomize.encode('utf-8')
# Use a ZipInfo to work around zipfile limitation on hosts with
# clocks set to a pre-1980 year (for instance, Raspberry Pi)
zinfo = zipfile.ZipInfo()
zinfo.filename = 'sitecustomize.py'
zinfo.date_time = ( %(year)i, %(month)i, %(day)i, %(hour)i, %(minute)i, %(second)i)
z.writestr(zinfo, sitecustomize)
z.close()
# Put the zipped up module_utils we got from the controller first in the python path so that we
# can monkeypatch the right basic
sys.path.insert(0, modlib_path)
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
%(coverage)s
# Run the module! By importing it as '__main__', it thinks it is executing as a script
runpy.run_module(mod_name='%(module_fqn)s', init_globals=None, run_name='__main__', alter_sys=True)
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
def debug(command, zipped_mod, json_params):
# The code here normally doesn't run. It's only used for debugging on the
# remote machine.
#
# The subcommands in this function make it easier to debug ansiballz
# modules. Here's the basic steps:
#
# Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv
# to save the module file remotely::
# $ ANSIBLE_KEEP_REMOTE_FILES=1 ansible host1 -m ping -a 'data=october' -vvv
#
# Part of the verbose output will tell you where on the remote machine the
# module was written to::
# [...]
# <host1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
# PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o
# ControlPath=/home/badger/.ansible/cp/ansible-ssh-%%h-%%p-%%r -tt rhel7 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
# LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping'"'"''
# [...]
#
# Log in to the remote machine and run the module file from the previous
# step with the explode subcommand to extract the module payload into
# source files::
# $ ssh host1
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping explode
# Module expanded into:
# /home/badger/.ansible/tmp/ansible-tmp-1461173408.08-279692652635227/ansible
#
# You can now edit the source files to instrument the code or experiment with
# different parameter values. When you're ready to run the code you've modified
# (instead of the code from the actual zipped module), use the execute subcommand like this::
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping execute
# Okay to use __file__ here because we're running from a kept file
basedir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'debug_dir')
args_path = os.path.join(basedir, 'args')
if command == 'excommunicate':
print('The excommunicate debug command is deprecated and will be removed in 2.11. Use execute instead.')
command = 'execute'
if command == 'explode':
# transform the ZIPDATA into an exploded directory of code and then
# print the path to the code. This is an easy way for people to look
# at the code on the remote machine for debugging it in that
# environment
z = zipfile.ZipFile(zipped_mod)
for filename in z.namelist():
if filename.startswith('/'):
raise Exception('Something wrong with this module zip file: should not contain absolute paths')
dest_filename = os.path.join(basedir, filename)
if dest_filename.endswith(os.path.sep) and not os.path.exists(dest_filename):
os.makedirs(dest_filename)
else:
directory = os.path.dirname(dest_filename)
if not os.path.exists(directory):
os.makedirs(directory)
f = open(dest_filename, 'wb')
f.write(z.read(filename))
f.close()
# write the args file
f = open(args_path, 'wb')
f.write(json_params)
f.close()
print('Module expanded into:')
print('%%s' %% basedir)
exitcode = 0
elif command == 'execute':
# Execute the exploded code instead of executing the module from the
# embedded ZIPDATA. This allows people to easily run their modified
# code on the remote machine to see how changes will affect it.
# Set pythonpath to the debug dir
sys.path.insert(0, basedir)
# read in the args file which the user may have modified
with open(args_path, 'rb') as f:
json_params = f.read()
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
# Run the module! By importing it as '__main__', it thinks it is executing as a script
runpy.run_module(mod_name='%(module_fqn)s', init_globals=None, run_name='__main__', alter_sys=True)
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
else:
print('WARNING: Unknown debug command. Doing nothing.')
exitcode = 0
return exitcode
#
# See comments in the debug() method for information on debugging
#
ANSIBALLZ_PARAMS = %(params)s
if PY3:
ANSIBALLZ_PARAMS = ANSIBALLZ_PARAMS.encode('utf-8')
try:
# There's a race condition with the controller removing the
# remote_tmpdir and this module executing under async. So we cannot
# store this in remote_tmpdir (use system tempdir instead)
# Only need to use [ansible_module]_payload_ in the temp_path until we move to zipimport
# (this helps ansible-test produce coverage stats)
temp_path = tempfile.mkdtemp(prefix='ansible_%(ansible_module)s_payload_')
zipped_mod = os.path.join(temp_path, 'ansible_%(ansible_module)s_payload.zip')
with open(zipped_mod, 'wb') as modlib:
modlib.write(base64.b64decode(ZIPDATA))
if len(sys.argv) == 2:
exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS)
else:
# Note: temp_path isn't needed once we switch to zipimport
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
finally:
try:
shutil.rmtree(temp_path)
except (NameError, OSError):
# tempdir creation probably failed
pass
sys.exit(exitcode)
if __name__ == '__main__':
_ansiballz_main()
'''
ANSIBALLZ_COVERAGE_TEMPLATE = '''
# Access to the working directory is required by coverage.
# Some platforms, such as macOS, may not allow querying the working directory when using become to drop privileges.
try:
os.getcwd()
except OSError:
os.chdir('/')
os.environ['COVERAGE_FILE'] = '%(coverage_output)s'
import atexit
try:
import coverage
except ImportError:
print('{"msg": "Could not import `coverage` module.", "failed": true}')
sys.exit(1)
cov = coverage.Coverage(config_file='%(coverage_config)s')
def atexit_coverage():
cov.stop()
cov.save()
atexit.register(atexit_coverage)
cov.start()
'''
ANSIBALLZ_COVERAGE_CHECK_TEMPLATE = '''
try:
if PY3:
import importlib.util
if importlib.util.find_spec('coverage') is None:
raise ImportError
else:
import imp
imp.find_module('coverage')
except ImportError:
print('{"msg": "Could not find `coverage` module.", "failed": true}')
sys.exit(1)
'''
ANSIBALLZ_RLIMIT_TEMPLATE = '''
import resource
existing_soft, existing_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# adjust soft limit subject to existing hard limit
requested_soft = min(existing_hard, %(rlimit_nofile)d)
if requested_soft != existing_soft:
try:
resource.setrlimit(resource.RLIMIT_NOFILE, (requested_soft, existing_hard))
except ValueError:
# some platforms (eg macOS) lie about their hard limit
pass
'''
def _strip_comments(source):
# Strip comments and blank lines from the wrapper
buf = []
for line in source.splitlines():
l = line.strip()
if not l or l.startswith(u'#'):
continue
buf.append(line)
return u'\n'.join(buf)
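# Editorial example (not part of the original file): blank lines and lines
# whose stripped form starts with '#' are dropped, so
#   _strip_comments(u'# header\n\nx = 1') == u'x = 1'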
if C.DEFAULT_KEEP_REMOTE_FILES:
# Keep comments when KEEP_REMOTE_FILES is set. That way users will see
# the comments with some nice usage instructions
ACTIVE_ANSIBALLZ_TEMPLATE = ANSIBALLZ_TEMPLATE
else:
# ANSIBALLZ_TEMPLATE stripped of comments for smaller over the wire size
ACTIVE_ANSIBALLZ_TEMPLATE = _strip_comments(ANSIBALLZ_TEMPLATE)
# dirname(dirname(dirname(site-packages/ansible/executor/module_common.py) == site-packages
# Do this instead of getting site-packages from distutils.sysconfig so we work when we
# haven't been installed
site_packages = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
CORE_LIBRARY_PATH_RE = re.compile(r'%s/(?P<path>ansible/modules/.*)\.(py|ps1)$' % site_packages)
COLLECTION_PATH_RE = re.compile(r'/(?P<path>ansible_collections/[^/]+/[^/]+/plugins/modules/.*)\.(py|ps1)$')
# Detect new-style Python modules by looking for required imports:
# import ansible_collections.[my_ns.my_col.plugins.module_utils.my_module_util]
# from ansible_collections.[my_ns.my_col.plugins.module_utils import my_module_util]
# import ansible.module_utils[.basic]
# from ansible.module_utils[ import basic]
# from ansible.module_utils[.basic import AnsibleModule]
# from ..module_utils[ import basic]
# from ..module_utils[.basic import AnsibleModule]
NEW_STYLE_PYTHON_MODULE_RE = re.compile(
# Relative imports
br'(?:from +\.{2,} *module_utils.* +import |'
# Collection absolute imports:
br'from +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.* +import |'
br'import +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.*|'
# Core absolute imports
br'from +ansible\.module_utils.* +import |'
br'import +ansible\.module_utils\.)'
)
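# Editorial example (not part of the original file): the pattern operates on
# bytes and matches any of the import shapes listed above, e.g.
#   NEW_STYLE_PYTHON_MODULE_RE.search(b'from ansible.module_utils.basic import AnsibleModule')
# returns a match, while plain b'import os' does not.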
class ModuleDepFinder(ast.NodeVisitor):
def __init__(self, module_fqn, *args, **kwargs):
"""
Walk the ast tree for the python module.
:arg module_fqn: The fully qualified name to reach this module in dotted notation.
example: ansible.module_utils.basic
Save submodule[.submoduleN][.identifier] into self.submodules
when they are from ansible.module_utils or ansible_collections packages
self.submodules will end up with tuples like:
- ('ansible', 'module_utils', 'basic',)
- ('ansible', 'module_utils', 'urls', 'fetch_url')
- ('ansible', 'module_utils', 'database', 'postgres')
- ('ansible', 'module_utils', 'database', 'postgres', 'quote')
- ('ansible', 'module_utils', 'database', 'postgres', 'quote')
- ('ansible_collections', 'my_ns', 'my_col', 'plugins', 'module_utils', 'foo')
It's up to calling code to determine whether the final element of the
tuple are module names or something else (function, class, or variable names)
.. seealso:: :python3:class:`ast.NodeVisitor`
"""
super(ModuleDepFinder, self).__init__(*args, **kwargs)
self.submodules = set()
self.module_fqn = module_fqn
def visit_Import(self, node):
"""
Handle import ansible.module_utils.MODLIB[.MODLIBn] [as asname]
We save these as interesting submodules when the imported library is in ansible.module_utils
or ansible.collections
"""
for alias in node.names:
if (alias.name.startswith('ansible.module_utils.') or
alias.name.startswith('ansible_collections.')):
py_mod = tuple(alias.name.split('.'))
self.submodules.add(py_mod)
self.generic_visit(node)
def visit_ImportFrom(self, node):
"""
Handle from ansible.module_utils.MODLIB import [.MODLIBn] [as asname]
Also has to handle relative imports
We save these as interesting submodules when the imported library is in ansible.module_utils
or ansible.collections
"""
# FIXME: These should all get skipped:
# from ansible.executor import module_common
# from ...executor import module_common
# from ... import executor (Currently it gives a non-helpful error)
if node.level > 0:
if self.module_fqn:
parts = tuple(self.module_fqn.split('.'))
if node.module:
# relative import: from .module import x
node_module = '.'.join(parts[:-node.level] + (node.module,))
else:
# relative import: from . import x
node_module = '.'.join(parts[:-node.level])
else:
# fall back to an absolute import
node_module = node.module
else:
# absolute import: from module import x
node_module = node.module
# Specialcase: six is a special case because of its
# import logic
py_mod = None
if node.names[0].name == '_six':
self.submodules.add(('_six',))
elif node_module.startswith('ansible.module_utils'):
# from ansible.module_utils.MODULE1[.MODULEn] import IDENTIFIER [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [,IDENTIFIER] [as asname]
# from ansible.module_utils import MODULE1 [,MODULEn] [as asname]
py_mod = tuple(node_module.split('.'))
elif node_module.startswith('ansible_collections.'):
if node_module.endswith('plugins.module_utils') or '.plugins.module_utils.' in node_module:
# from ansible_collections.ns.coll.plugins.module_utils import MODULE [as aname] [,MODULE2] [as aname]
# from ansible_collections.ns.coll.plugins.module_utils.MODULE import IDENTIFIER [as aname]
# FIXME: Unhandled cornercase (needs to be ignored):
# from ansible_collections.ns.coll.plugins.[!module_utils].[FOO].plugins.module_utils import IDENTIFIER
py_mod = tuple(node_module.split('.'))
else:
# Not from module_utils so ignore. for instance:
# from ansible_collections.ns.coll.plugins.lookup import IDENTIFIER
pass
if py_mod:
for alias in node.names:
self.submodules.add(py_mod + (alias.name,))
self.generic_visit(node)
def _slurp(path):
if not os.path.exists(path):
raise AnsibleError("imported module support code does not exist at %s" % os.path.abspath(path))
with open(path, 'rb') as fd:
data = fd.read()
return data
def _get_shebang(interpreter, task_vars, templar, args=tuple()):
"""
Note not stellar API:
Returns None instead of always returning a shebang line. Doing it this
way allows the caller to decide to use the shebang it read from the
file rather than trust that we reformatted what they already have
correctly.
"""
interpreter_name = os.path.basename(interpreter).strip()
# FUTURE: add logical equivalence for python3 in the case of py3-only modules
# check for first-class interpreter config
interpreter_config_key = "INTERPRETER_%s" % interpreter_name.upper()
if C.config.get_configuration_definitions().get(interpreter_config_key):
# a config def exists for this interpreter type; consult config for the value
interpreter_out = C.config.get_config_value(interpreter_config_key, variables=task_vars)
discovered_interpreter_config = u'discovered_interpreter_%s' % interpreter_name
interpreter_out = templar.template(interpreter_out.strip())
facts_from_task_vars = task_vars.get('ansible_facts', {})
# handle interpreter discovery if requested
if interpreter_out in ['auto', 'auto_legacy', 'auto_silent', 'auto_legacy_silent']:
if discovered_interpreter_config not in facts_from_task_vars:
# interpreter discovery is desired, but has not been run for this host
raise InterpreterDiscoveryRequiredError("interpreter discovery needed",
interpreter_name=interpreter_name,
discovery_mode=interpreter_out)
else:
interpreter_out = facts_from_task_vars[discovered_interpreter_config]
else:
# a config def does not exist for this interpreter type; consult vars for a possible direct override
interpreter_config = u'ansible_%s_interpreter' % interpreter_name
if interpreter_config not in task_vars:
return None, interpreter
interpreter_out = templar.template(task_vars[interpreter_config].strip())
shebang = u'#!' + interpreter_out
if args:
shebang = shebang + u' ' + u' '.join(args)
return shebang, interpreter_out
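# Editorial example (not part of the original file): for an interpreter with
# no first-class config definition, e.g. _get_shebang(u'/usr/bin/perl',
# task_vars, templar) with task_vars['ansible_perl_interpreter'] set to
# '/usr/bin/perl', the call returns (u'#!/usr/bin/perl', u'/usr/bin/perl');
# without that variable it returns (None, u'/usr/bin/perl').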
class ModuleInfo:
def __init__(self, name, paths):
self.py_src = False
self.pkg_dir = False
path = None
if imp is None:
# don't pretend this is a top-level module, prefix the rest of the namespace
self._info = info = importlib.machinery.PathFinder.find_spec('ansible.module_utils.' + name, paths)
if info is not None:
self.py_src = os.path.splitext(info.origin)[1] in importlib.machinery.SOURCE_SUFFIXES
self.pkg_dir = info.origin.endswith('/__init__.py')
path = info.origin
else:
raise ImportError("No module named '%s'" % name)
else:
self._info = info = imp.find_module(name, paths)
self.py_src = info[2][2] == imp.PY_SOURCE
self.pkg_dir = info[2][2] == imp.PKG_DIRECTORY
if self.pkg_dir:
path = os.path.join(info[1], '__init__.py')
else:
path = info[1]
self.path = path
def get_source(self):
if imp and self.py_src:
try:
return self._info[0].read()
finally:
self._info[0].close()
return _slurp(self.path)
def __repr__(self):
return 'ModuleInfo: py_src=%s, pkg_dir=%s, path=%s' % (self.py_src, self.pkg_dir, self.path)
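# Editorial example (not part of the original file): on Python 3 this wraps
# importlib's PathFinder, e.g.
#   info = ModuleInfo('basic', [_MODULE_UTILS_PATH])
#   info.py_src        # True -- basic.py is plain Python source
#   info.get_source()  # the raw bytes of module_utils/basic.py via _slurp()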
class CollectionModuleInfo(ModuleInfo):
def __init__(self, name, pkg):
self._mod_name = name
self.py_src = True
self.pkg_dir = False
split_name = pkg.split('.')
split_name.append(name)
if len(split_name) < 5 or split_name[0] != 'ansible_collections' or split_name[3] != 'plugins' or split_name[4] != 'module_utils':
raise ValueError('must search for something beneath a collection module_utils, not {0}.{1}'.format(to_native(pkg), to_native(name)))
# NB: we can't use pkgutil.get_data safely here, since we don't want to import/execute package/module code on
# the controller while analyzing/assembling the module, so we'll have to manually import the collection's
# Python package to locate it (import root collection, reassemble resource path beneath, fetch source)
# FIXME: handle MU redirection logic here
collection_pkg_name = '.'.join(split_name[0:3])
resource_base_path = os.path.join(*split_name[3:])
# look for package_dir first, then module
self._src = pkgutil.get_data(collection_pkg_name, to_native(os.path.join(resource_base_path, '__init__.py')))
if self._src is not None: # empty string is OK
return
self._src = pkgutil.get_data(collection_pkg_name, to_native(resource_base_path + '.py'))
if not self._src:
raise ImportError('unable to load collection-hosted module_util'
' {0}.{1}'.format(to_native(pkg), to_native(name)))
def get_source(self):
return self._src
class InternalRedirectModuleInfo(ModuleInfo):
def __init__(self, name, full_name):
self.pkg_dir = None
self._original_name = full_name
self.path = full_name.replace('.', '/') + '.py'
collection_meta = _get_collection_metadata('ansible.builtin')
redirect = collection_meta.get('plugin_routing', {}).get('module_utils', {}).get(name, {}).get('redirect', None)
if not redirect:
raise ImportError('no redirect found for {0}'.format(name))
self._redirect = redirect
self.py_src = True
self._shim_src = """
import sys
import {1} as mod
sys.modules['{0}'] = mod
""".format(self._original_name, self._redirect)
def get_source(self):
return self._shim_src
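# Editorial example (not part of the original file): for a routing entry that
# redirects module_utils 'foo' (full name 'ansible.module_utils.foo') to
# 'ansible.module_utils.bar', get_source() returns a shim equivalent to:
#   import sys
#   import ansible.module_utils.bar as mod
#   sys.modules['ansible.module_utils.foo'] = mod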
def recursive_finder(name, module_fqn, data, py_module_names, py_module_cache, zf):
"""
Using ModuleDepFinder, make sure we have all of the module_utils files that
the module and its module_utils files needs.
:arg name: Name of the python module we're examining
:arg module_fqn: Fully qualified name of the python module we're scanning
:arg py_module_names: set of the fully qualified module names represented as a tuple of their
FQN with __init__ appended if the module is also a python package). Presence of a FQN in
this set means that we've already examined it for module_util deps.
:arg py_module_cache: map python module names (represented as a tuple of their FQN with __init__
appended if the module is also a python package) to a tuple of the code in the module and
the pathname the module would have inside of a Python toplevel (like site-packages)
:arg zf: An open :python:class:`zipfile.ZipFile` object that holds the Ansible module payload
which we're assembling
"""
# Parse the module and find the imports of ansible.module_utils
try:
tree = ast.parse(data)
except (SyntaxError, IndentationError) as e:
raise AnsibleError("Unable to import %s due to %s" % (name, e.msg))
finder = ModuleDepFinder(module_fqn)
finder.visit(tree)
#
# Determine which of the imports we've found are modules (as opposed to
# class, function, or variable names) for packages
#
module_utils_paths = [p for p in module_utils_loader._get_paths(subdirs=False) if os.path.isdir(p)]
# FIXME: Do we still need this? It feels like module_utils_loader should include
# _MODULE_UTILS_PATH
module_utils_paths.append(_MODULE_UTILS_PATH)
normalized_modules = set()
# Loop through the imports that we've found to normalize them
# Exclude paths that match with paths we've already processed
# (Have to exclude them a second time once the paths are processed)
for py_module_name in finder.submodules.difference(py_module_names):
module_info = None
if py_module_name[0:3] == ('ansible', 'module_utils', 'six'):
# Special case the python six library because it messes with the
# import process in an incompatible way
module_info = ModuleInfo('six', module_utils_paths)
py_module_name = ('ansible', 'module_utils', 'six')
idx = 0
elif py_module_name[0:3] == ('ansible', 'module_utils', '_six'):
# Special case the python six library because it messes with the
# import process in an incompatible way
module_info = ModuleInfo('_six', [os.path.join(p, 'six') for p in module_utils_paths])
py_module_name = ('ansible', 'module_utils', 'six', '_six')
idx = 0
elif py_module_name[0] == 'ansible_collections':
# FIXME (nitz): replicate module name resolution like below for granular imports
for idx in (1, 2):
if len(py_module_name) < idx:
break
try:
# this is a collection-hosted MU; look it up with pkgutil.get_data()
module_info = CollectionModuleInfo(py_module_name[-idx], '.'.join(py_module_name[:-idx]))
break
except ImportError:
continue
elif py_module_name[0:2] == ('ansible', 'module_utils'):
# Need to remove ansible.module_utils because PluginLoader may find different paths
# for us to look in
relative_module_utils_dir = py_module_name[2:]
# Check whether either the last or the second to last identifier is
# a module name
for idx in (1, 2):
if len(relative_module_utils_dir) < idx:
break
try:
module_info = ModuleInfo(py_module_name[-idx],
[os.path.join(p, *relative_module_utils_dir[:-idx]) for p in module_utils_paths])
break
except ImportError:
# check metadata for redirect, generate stub if present
try:
module_info = InternalRedirectModuleInfo(py_module_name[-idx],
'.'.join(py_module_name[:(None if idx == 1 else -1)]))
break
except ImportError:
continue
else:
# If we get here, it's because of a bug in ModuleDepFinder. If we get a reproducer we
# should then fix ModuleDepFinder
display.warning('ModuleDepFinder improperly found a non-module_utils import %s'
% [py_module_name])
continue
# Could not find the module. Construct a helpful error message.
if module_info is None:
msg = ['Could not find imported module support code for %s. Looked for' % (name,)]
if idx == 2:
msg.append('either %s.py or %s.py' % (py_module_name[-1], py_module_name[-2]))
else:
msg.append(py_module_name[-1])
raise AnsibleError(' '.join(msg))
if isinstance(module_info, CollectionModuleInfo):
if idx == 2:
# We've determined that the last portion was an identifier and
# thus, not part of the module name
py_module_name = py_module_name[:-1]
# HACK: maybe surface collection dirs in here and use existing find_module code?
normalized_name = py_module_name
normalized_data = module_info.get_source()
normalized_path = os.path.join(*py_module_name)
py_module_cache[normalized_name] = (normalized_data, normalized_path)
normalized_modules.add(normalized_name)
# HACK: walk back up the package hierarchy to pick up package inits; this won't do the right thing
# for actual packages yet...
accumulated_pkg_name = []
for pkg in py_module_name[:-1]:
accumulated_pkg_name.append(pkg) # we're accumulating this across iterations
normalized_name = tuple(accumulated_pkg_name[:] + ['__init__']) # extra machinations to get a hashable type (list is not)
if normalized_name not in py_module_cache:
normalized_path = os.path.join(*accumulated_pkg_name)
# HACK: possibly preserve some of the actual package file contents; problematic for extend_paths and others though?
normalized_data = ''
py_module_cache[normalized_name] = (normalized_data, normalized_path)
normalized_modules.add(normalized_name)
else:
# Found a byte compiled file rather than source. We cannot send byte
# compiled over the wire as the python version might be different.
# imp.find_module seems to prefer to return source packages so we just
# error out if imp.find_module returns byte compiled files (This is
# fragile as it depends on undocumented imp.find_module behaviour)
if not module_info.pkg_dir and not module_info.py_src:
msg = ['Could not find python source for imported module support code for %s. Looked for' % name]
if idx == 2:
msg.append('either %s.py or %s.py' % (py_module_name[-1], py_module_name[-2]))
else:
msg.append(py_module_name[-1])
raise AnsibleError(' '.join(msg))
if idx == 2:
# We've determined that the last portion was an identifier and
# thus, not part of the module name
py_module_name = py_module_name[:-1]
# If not already processed then we've got work to do
# If not in the cache, then read the file into the cache
# We already have a file handle for the module open so it makes
# sense to read it now
if py_module_name not in py_module_cache:
if module_info.pkg_dir:
# Read the __init__.py instead of the module file as this is
# a python package
normalized_name = py_module_name + ('__init__',)
if normalized_name not in py_module_names:
normalized_data = module_info.get_source()
py_module_cache[normalized_name] = (normalized_data, module_info.path)
normalized_modules.add(normalized_name)
else:
normalized_name = py_module_name
if normalized_name not in py_module_names:
normalized_data = module_info.get_source()
py_module_cache[normalized_name] = (normalized_data, module_info.path)
normalized_modules.add(normalized_name)
#
# Make sure that all the packages that this module is a part of
# are also added
#
for i in range(1, len(py_module_name)):
py_pkg_name = py_module_name[:-i] + ('__init__',)
if py_pkg_name not in py_module_names:
# Need to remove ansible.module_utils because PluginLoader may find
# different paths for us to look in
relative_module_utils = py_pkg_name[2:]
pkg_dir_info = ModuleInfo(relative_module_utils[-1],
[os.path.join(p, *relative_module_utils[:-1]) for p in module_utils_paths])
normalized_modules.add(py_pkg_name)
py_module_cache[py_pkg_name] = (pkg_dir_info.get_source(), pkg_dir_info.path)
# FIXME: Currently the AnsiBallZ wrapper monkeypatches module args into a global
# variable in basic.py. If a module doesn't import basic.py, then the AnsiBallZ wrapper will
    # traceback when it tries to monkeypatch. So, for now, we have to unconditionally include
# basic.py.
#
# In the future we need to change the wrapper to monkeypatch the args into a global variable in
# their own, separate python module. That way we won't require basic.py. Modules which don't
# want basic.py can import that instead. AnsibleModule will need to change to import the vars
# from the separate python module and mirror the args into its global variable for backwards
# compatibility.
if ('ansible', 'module_utils', 'basic',) not in py_module_names:
pkg_dir_info = ModuleInfo('basic', module_utils_paths)
normalized_modules.add(('ansible', 'module_utils', 'basic',))
py_module_cache[('ansible', 'module_utils', 'basic',)] = (pkg_dir_info.get_source(), pkg_dir_info.path)
# End of AnsiballZ hack
#
# iterate through all of the ansible.module_utils* imports that we haven't
# already checked for new imports
#
# set of modules that we haven't added to the zipfile
unprocessed_py_module_names = normalized_modules.difference(py_module_names)
for py_module_name in unprocessed_py_module_names:
py_module_path = os.path.join(*py_module_name)
py_module_file_name = '%s.py' % py_module_path
zf.writestr(py_module_file_name, py_module_cache[py_module_name][0])
display.vvvvv("Using module_utils file %s" % py_module_cache[py_module_name][1])
# Add the names of the files we're scheduling to examine in the loop to
# py_module_names so that we don't re-examine them in the next pass
# through recursive_finder()
py_module_names.update(unprocessed_py_module_names)
for py_module_file in unprocessed_py_module_names:
next_fqn = '.'.join(py_module_file)
recursive_finder(py_module_file[-1], next_fqn, py_module_cache[py_module_file][0],
py_module_names, py_module_cache, zf)
# Save memory; the file won't have to be read again for this ansible module.
del py_module_cache[py_module_file]
def _is_binary(b_module_data):
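    # Heuristic: build the set of bytes that can appear in text (BEL, BS, TAB,
    # LF, FF, CR, ESC plus 0x20-0xFF except DEL), delete them all from the first
    # 1 KiB of the module, and treat any leftover bytes as evidence of a binary file.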
textchars = bytearray(set([7, 8, 9, 10, 12, 13, 27]) | set(range(0x20, 0x100)) - set([0x7f]))
start = b_module_data[:1024]
return bool(start.translate(None, textchars))
def _get_ansible_module_fqn(module_path):
"""
Get the fully qualified name for an ansible module based on its pathname
    remote_module_fqn is the fully qualified name, like ansible.modules.system.ping
    or ansible_collections.Namespace.Collection_name.plugins.modules.ping
.. warning:: This function is for ansible modules only. It won't work for other things
(non-module plugins, etc)
"""
remote_module_fqn = None
# Is this a core module?
match = CORE_LIBRARY_PATH_RE.search(module_path)
if not match:
# Is this a module in a collection?
match = COLLECTION_PATH_RE.search(module_path)
# We can tell the FQN for core modules and collection modules
if match:
path = match.group('path')
if '.' in path:
# FQNs must be valid as python identifiers. This sanity check has failed.
# we could check other things as well
raise ValueError('Module name (or path) was not a valid python identifier')
remote_module_fqn = '.'.join(path.split('/'))
else:
# Currently we do not handle modules in roles so we can end up here for that reason
raise ValueError("Unable to determine module's fully qualified name")
return remote_module_fqn
def _add_module_to_zip(zf, remote_module_fqn, b_module_data):
"""Add a module from ansible or from an ansible collection into the module zip"""
module_path_parts = remote_module_fqn.split('.')
# Write the module
module_path = '/'.join(module_path_parts) + '.py'
zf.writestr(module_path, b_module_data)
# Write the __init__.py's necessary to get there
if module_path_parts[0] == 'ansible':
# The ansible namespace is setup as part of the module_utils setup...
start = 2
existing_paths = frozenset()
else:
# ... but ansible_collections and other toplevels are not
start = 1
existing_paths = frozenset(zf.namelist())
for idx in range(start, len(module_path_parts)):
package_path = '/'.join(module_path_parts[:idx]) + '/__init__.py'
# If a collections module uses module_utils from a collection then most packages will have already been added by recursive_finder.
if package_path in existing_paths:
continue
# Note: We don't want to include more than one ansible module in a payload at this time
# so no need to fill the __init__.py with namespace code
zf.writestr(package_path, b'')
def _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression, async_timeout, become,
become_method, become_user, become_password, become_flags, environment):
"""
Given the source of the module, convert it to a Jinja2 template to insert
module code and return whether it's a new or old style module.
"""
module_substyle = module_style = 'old'
# module_style is something important to calling code (ActionBase). It
# determines how arguments are formatted (json vs k=v) and whether
# a separate arguments file needs to be sent over the wire.
# module_substyle is extra information that's useful internally. It tells
# us what we have to look to substitute in the module files and whether
# we're using module replacer or ansiballz to format the module itself.
if _is_binary(b_module_data):
module_substyle = module_style = 'binary'
elif REPLACER in b_module_data:
        # Do REPLACER before the from ansible.module_utils check because we need
        # to make sure we substitute "from ansible.module_utils.basic import *" for REPLACER
module_style = 'new'
module_substyle = 'python'
b_module_data = b_module_data.replace(REPLACER, b'from ansible.module_utils.basic import *')
elif NEW_STYLE_PYTHON_MODULE_RE.search(b_module_data):
module_style = 'new'
module_substyle = 'python'
elif REPLACER_WINDOWS in b_module_data:
module_style = 'new'
module_substyle = 'powershell'
b_module_data = b_module_data.replace(REPLACER_WINDOWS, b'#Requires -Module Ansible.ModuleUtils.Legacy')
elif re.search(b'#Requires -Module', b_module_data, re.IGNORECASE) \
or re.search(b'#Requires -Version', b_module_data, re.IGNORECASE)\
or re.search(b'#AnsibleRequires -OSVersion', b_module_data, re.IGNORECASE) \
or re.search(b'#AnsibleRequires -Powershell', b_module_data, re.IGNORECASE) \
or re.search(b'#AnsibleRequires -CSharpUtil', b_module_data, re.IGNORECASE):
module_style = 'new'
module_substyle = 'powershell'
elif REPLACER_JSONARGS in b_module_data:
module_style = 'new'
module_substyle = 'jsonargs'
elif b'WANT_JSON' in b_module_data:
module_substyle = module_style = 'non_native_want_json'
shebang = None
# Neither old-style, non_native_want_json nor binary modules should be modified
# except for the shebang line (Done by modify_module)
if module_style in ('old', 'non_native_want_json', 'binary'):
return b_module_data, module_style, shebang
output = BytesIO()
py_module_names = set()
try:
remote_module_fqn = _get_ansible_module_fqn(module_path)
except ValueError:
# Modules in roles currently are not found by the fqn heuristic so we
# fallback to this. This means that relative imports inside a module from
        # a role may fail. Absolute imports should be used for future-proofing.
# People should start writing collections instead of modules in roles so we
# may never fix this
display.debug('ANSIBALLZ: Could not determine module FQN')
remote_module_fqn = 'ansible.modules.%s' % module_name
if module_substyle == 'python':
params = dict(ANSIBLE_MODULE_ARGS=module_args,)
try:
python_repred_params = repr(json.dumps(params))
except TypeError as e:
raise AnsibleError("Unable to pass options to module, they must be JSON serializable: %s" % to_native(e))
try:
compression_method = getattr(zipfile, module_compression)
except AttributeError:
display.warning(u'Bad module compression string specified: %s. Using ZIP_STORED (no compression)' % module_compression)
compression_method = zipfile.ZIP_STORED
lookup_path = os.path.join(C.DEFAULT_LOCAL_TMP, 'ansiballz_cache')
cached_module_filename = os.path.join(lookup_path, "%s-%s" % (module_name, module_compression))
zipdata = None
# Optimization -- don't lock if the module has already been cached
if os.path.exists(cached_module_filename):
display.debug('ANSIBALLZ: using cached module: %s' % cached_module_filename)
with open(cached_module_filename, 'rb') as module_data:
zipdata = module_data.read()
else:
if module_name in action_write_locks.action_write_locks:
display.debug('ANSIBALLZ: Using lock for %s' % module_name)
lock = action_write_locks.action_write_locks[module_name]
else:
# If the action plugin directly invokes the module (instead of
# going through a strategy) then we don't have a cross-process
# Lock specifically for this module. Use the "unexpected
# module" lock instead
display.debug('ANSIBALLZ: Using generic lock for %s' % module_name)
lock = action_write_locks.action_write_locks[None]
display.debug('ANSIBALLZ: Acquiring lock')
with lock:
display.debug('ANSIBALLZ: Lock acquired: %s' % id(lock))
# Check that no other process has created this while we were
# waiting for the lock
if not os.path.exists(cached_module_filename):
display.debug('ANSIBALLZ: Creating module')
# Create the module zip data
zipoutput = BytesIO()
zf = zipfile.ZipFile(zipoutput, mode='w', compression=compression_method)
# py_module_cache maps python module names to a tuple of the code in the module
# and the pathname to the module. See the recursive_finder() documentation for
# more info.
# Here we pre-load it with modules which we create without bothering to
# read from actual files (In some cases, these need to differ from what ansible
# ships because they're namespace packages in the module)
py_module_cache = {
('ansible', '__init__',): (
b'from pkgutil import extend_path\n'
b'__path__=extend_path(__path__,__name__)\n'
b'__version__="' + to_bytes(__version__) +
b'"\n__author__="' + to_bytes(__author__) + b'"\n',
'ansible/__init__.py'),
('ansible', 'module_utils', '__init__',): (
b'from pkgutil import extend_path\n'
b'__path__=extend_path(__path__,__name__)\n',
'ansible/module_utils/__init__.py')}
for (py_module_name, (file_data, filename)) in py_module_cache.items():
zf.writestr(filename, file_data)
# py_module_names keeps track of which modules we've already scanned for
# module_util dependencies
py_module_names.add(py_module_name)
# Returning the ast tree is a temporary hack. We need to know if the module has
# a main() function or not as we are deprecating new-style modules without
# main(). Because parsing the ast is expensive, return it from recursive_finder
# instead of reparsing. Once the deprecation is over and we remove that code,
# also remove returning of the ast tree.
recursive_finder(module_name, remote_module_fqn, b_module_data, py_module_names,
py_module_cache, zf)
display.debug('ANSIBALLZ: Writing module into payload')
_add_module_to_zip(zf, remote_module_fqn, b_module_data)
zf.close()
zipdata = base64.b64encode(zipoutput.getvalue())
# Write the assembled module to a temp file (write to temp
# so that no one looking for the file reads a partially
# written file)
if not os.path.exists(lookup_path):
# Note -- if we have a global function to setup, that would
# be a better place to run this
os.makedirs(lookup_path)
display.debug('ANSIBALLZ: Writing module')
with open(cached_module_filename + '-part', 'wb') as f:
f.write(zipdata)
# Rename the file into its final position in the cache so
# future users of this module can read it off the
# filesystem instead of constructing from scratch.
display.debug('ANSIBALLZ: Renaming module')
os.rename(cached_module_filename + '-part', cached_module_filename)
display.debug('ANSIBALLZ: Done creating module')
if zipdata is None:
display.debug('ANSIBALLZ: Reading module after lock')
# Another process wrote the file while we were waiting for
# the write lock. Go ahead and read the data from disk
# instead of re-creating it.
try:
with open(cached_module_filename, 'rb') as f:
zipdata = f.read()
except IOError:
raise AnsibleError('A different worker process failed to create module file. '
'Look at traceback for that process for debugging information.')
zipdata = to_text(zipdata, errors='surrogate_or_strict')
shebang, interpreter = _get_shebang(u'/usr/bin/python', task_vars, templar)
if shebang is None:
shebang = u'#!/usr/bin/python'
# FUTURE: the module cache entry should be invalidated if we got this value from a host-dependent source
rlimit_nofile = C.config.get_config_value('PYTHON_MODULE_RLIMIT_NOFILE', variables=task_vars)
if not isinstance(rlimit_nofile, int):
rlimit_nofile = int(templar.template(rlimit_nofile))
if rlimit_nofile:
rlimit = ANSIBALLZ_RLIMIT_TEMPLATE % dict(
rlimit_nofile=rlimit_nofile,
)
else:
rlimit = ''
coverage_config = os.environ.get('_ANSIBLE_COVERAGE_CONFIG')
if coverage_config:
coverage_output = os.environ['_ANSIBLE_COVERAGE_OUTPUT']
if coverage_output:
# Enable code coverage analysis of the module.
# This feature is for internal testing and may change without notice.
coverage = ANSIBALLZ_COVERAGE_TEMPLATE % dict(
coverage_config=coverage_config,
coverage_output=coverage_output,
)
else:
# Verify coverage is available without importing it.
# This will detect when a module would fail with coverage enabled with minimal overhead.
coverage = ANSIBALLZ_COVERAGE_CHECK_TEMPLATE
else:
coverage = ''
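        # The current UTC timestamp is substituted into the AnsiballZ wrapper
        # template below, along with the zipped payload and module parameters.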
now = datetime.datetime.utcnow()
output.write(to_bytes(ACTIVE_ANSIBALLZ_TEMPLATE % dict(
zipdata=zipdata,
ansible_module=module_name,
module_fqn=remote_module_fqn,
params=python_repred_params,
shebang=shebang,
coding=ENCODING_STRING,
year=now.year,
month=now.month,
day=now.day,
hour=now.hour,
minute=now.minute,
second=now.second,
coverage=coverage,
rlimit=rlimit,
)))
b_module_data = output.getvalue()
elif module_substyle == 'powershell':
# Powershell/winrm don't actually make use of shebang so we can
# safely set this here. If we let the fallback code handle this
# it can fail in the presence of the UTF8 BOM commonly added by
# Windows text editors
shebang = u'#!powershell'
# create the common exec wrapper payload and set that as the module_data
# bytes
b_module_data = ps_manifest._create_powershell_wrapper(
b_module_data, module_path, module_args, environment,
async_timeout, become, become_method, become_user, become_password,
become_flags, module_substyle, task_vars, remote_module_fqn
)
elif module_substyle == 'jsonargs':
module_args_json = to_bytes(json.dumps(module_args))
# these strings could be included in a third-party module but
# officially they were included in the 'basic' snippet for new-style
# python modules (which has been replaced with something else in
# ansiballz) If we remove them from jsonargs-style module replacer
# then we can remove them everywhere.
python_repred_args = to_bytes(repr(module_args_json))
b_module_data = b_module_data.replace(REPLACER_VERSION, to_bytes(repr(__version__)))
b_module_data = b_module_data.replace(REPLACER_COMPLEX, python_repred_args)
b_module_data = b_module_data.replace(REPLACER_SELINUX, to_bytes(','.join(C.DEFAULT_SELINUX_SPECIAL_FS)))
# The main event -- substitute the JSON args string into the module
b_module_data = b_module_data.replace(REPLACER_JSONARGS, module_args_json)
facility = b'syslog.' + to_bytes(task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY), errors='surrogate_or_strict')
b_module_data = b_module_data.replace(b'syslog.LOG_USER', facility)
return (b_module_data, module_style, shebang)
def modify_module(module_name, module_path, module_args, templar, task_vars=None, module_compression='ZIP_STORED', async_timeout=0, become=False,
become_method=None, become_user=None, become_password=None, become_flags=None, environment=None):
"""
Used to insert chunks of code into modules before transfer rather than
doing regular python imports. This allows for more efficient transfer in
a non-bootstrapping scenario by not moving extra files over the wire and
also takes care of embedding arguments in the transferred modules.
This version is done in such a way that local imports can still be
used in the module code, so IDEs don't have to be aware of what is going on.
Example:
from ansible.module_utils.basic import *
... will result in the insertion of basic.py into the module
from the module_utils/ directory in the source tree.
For powershell, this code effectively no-ops, as the exec wrapper requires access to a number of
properties not available here.
"""
task_vars = {} if task_vars is None else task_vars
environment = {} if environment is None else environment
with open(module_path, 'rb') as f:
# read in the module source
b_module_data = f.read()
(b_module_data, module_style, shebang) = _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression,
async_timeout=async_timeout, become=become, become_method=become_method,
become_user=become_user, become_password=become_password, become_flags=become_flags,
environment=environment)
if module_style == 'binary':
return (b_module_data, module_style, to_text(shebang, nonstring='passthru'))
elif shebang is None:
b_lines = b_module_data.split(b"\n", 1)
if b_lines[0].startswith(b"#!"):
b_shebang = b_lines[0].strip()
# shlex.split on python-2.6 needs bytes. On python-3.x it needs text
args = shlex.split(to_native(b_shebang[2:], errors='surrogate_or_strict'))
# _get_shebang() takes text strings
args = [to_text(a, errors='surrogate_or_strict') for a in args]
interpreter = args[0]
b_new_shebang = to_bytes(_get_shebang(interpreter, task_vars, templar, args[1:])[0],
errors='surrogate_or_strict', nonstring='passthru')
if b_new_shebang:
b_lines[0] = b_shebang = b_new_shebang
if os.path.basename(interpreter).startswith(u'python'):
b_lines.insert(1, b_ENCODING_STRING)
shebang = to_text(b_shebang, nonstring='passthru', errors='surrogate_or_strict')
else:
# No shebang, assume a binary module?
pass
b_module_data = b"\n".join(b_lines)
return (b_module_data, module_style, shebang)
def get_action_args_with_defaults(action, args, defaults, templar):
tmp_args = {}
module_defaults = {}
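    # Precedence, lowest to highest: group defaults (module_defaults_groups),
    # action-specific entries from module_defaults, then the task's own args.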
# Merge latest defaults into dict, since they are a list of dicts
if isinstance(defaults, list):
for default in defaults:
module_defaults.update(default)
# if I actually have defaults, template and merge
if module_defaults:
module_defaults = templar.template(module_defaults)
# deal with configured group defaults first
if action in C.config.module_defaults_groups:
for group in C.config.module_defaults_groups.get(action, []):
tmp_args.update((module_defaults.get('group/{0}'.format(group)) or {}).copy())
# handle specific action defaults
if action in module_defaults:
tmp_args.update(module_defaults[action].copy())
# direct args override all
tmp_args.update(args)
return tmp_args
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,516 |
Flatcar Container Linux not properly discovered
|
##### SUMMARY
[Flatcar Container Linux](https://www.flatcar-linux.org) is not properly discovered by Ansible, in particular when setting the hostname.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hostname
##### ANSIBLE VERSION
```paste below
ansible --version
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['.../.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### OS / ENVIRONMENT
Guest OS: Flatcar Container Linux Alpha 2492.0.0
Host: Fedora 32
##### STEPS TO REPRODUCE
1. Clone [image builder](https://github.com/kubernetes-sigs/image-builder).
2. Adjust files for Flatcar.
3. Run `packer build`, which creates a Flatcar image and provisions a host.
##### EXPECTED RESULTS
No error
##### ACTUAL RESULTS
Failure like:
```
flatcar-alpha: TASK [sysprep : Set hostname] **************************************************
flatcar-alpha: fatal: [default]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Flatcar)"}
```
For info, in Flatcar you can see for example:
```
$ cat /etc/os-release
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=2492.0.0
VERSION_ID=2492.0.0
BUILD_ID=2020-04-28-2210
PRETTY_NAME="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar-linux.org/"
BUG_REPORT_URL="https://issues.flatcar-linux.org"
FLATCAR_BOARD="amd64-usr"
$ cat /etc/lsb-release
DISTRIB_ID="Flatcar Container Linux by Kinvolk"
DISTRIB_RELEASE=2492.0.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
```
As far as I understand, so far Ansible has only supported [CoreOS Container Linux](https://github.com/ansible/ansible/blob/cedfe34619128783d2a799695bd4c53d6adc9dd1/lib/ansible/module_utils/facts/system/distribution.py#L387), which will soon be EOL. Recently there was an attempt to support [Fedora CoreOS or RedHat CoreOS](https://github.com/ansible/ansible/pull/53563), but it was not merged. I am not sure whether there has been any recent progress on that.
So would it be reasonable to simply add new code for Flatcar, just like CoreOS Container Linux?
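For illustration, here is a minimal sketch of what that could look like, modeled on the existing `parse_distribution_file_Coreos` method in `lib/ansible/module_utils/facts/system/distribution.py` and relying on that module's existing imports (`re`, `get_distribution`). The `/etc/flatcar/update.conf` path and the `Flatcar` name are assumptions for the example, not the actual patch:

```python
# Hypothetical sketch only -- mirrors parse_distribution_file_Coreos.
# Assumes an OSDIST_LIST entry such as:
#   {'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
# and that distro.id() reports 'flatcar' (per ID=flatcar in os-release).
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
    flatcar_facts = {}
    if get_distribution().lower() != 'flatcar':
        return False, flatcar_facts
    if not data:
        # empty update.conf, nothing to parse
        return False, flatcar_facts
    release = re.search("^GROUP=(.*)", data)
    if release:
        flatcar_facts['distribution_release'] = release.group(1).strip('"')
    return True, flatcar_facts
```

Like CoreOS today, `Flatcar` would not appear in `OS_FAMILY_MAP`, so `os_family` would fall back to the distribution name unless a mapping is also added.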
|
https://github.com/ansible/ansible/issues/69516
|
https://github.com/ansible/ansible/pull/69627
|
d7f61cbc281f4b8eccf7fe67eea5522cb28b52b2
|
598e3392a9597f0214d68882da4f4ca07314ce41
| 2020-05-14T15:36:51Z |
python
| 2020-06-02T13:11:53Z |
changelogs/fragments/69516_flatcar_distribution.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,516 |
Flatcar Container Linux not properly discovered
|
##### SUMMARY
[Flatcar Container Linux](https://www.flatcar-linux.org) is not properly discovered by Ansible, in particular when setting the hostname.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hostname
##### ANSIBLE VERSION
```paste below
ansible --version
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['.../.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### OS / ENVIRONMENT
Guest OS: Flatcar Container Linux Alpha 2492.0.0
Host: Fedora 32
##### STEPS TO REPRODUCE
1. Clone [image builder](https://github.com/kubernetes-sigs/image-builder).
2. Adjust files for Flatcar.
3. Run `packer build`, which creates a Flatcar image and provisions a host.
##### EXPECTED RESULTS
No error
##### ACTUAL RESULTS
Failure like:
```
flatcar-alpha: TASK [sysprep : Set hostname] **************************************************
flatcar-alpha: fatal: [default]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Flatcar)"}
```
For info, in Flatcar you can see for example:
```
$ cat /etc/os-release
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=2492.0.0
VERSION_ID=2492.0.0
BUILD_ID=2020-04-28-2210
PRETTY_NAME="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar-linux.org/"
BUG_REPORT_URL="https://issues.flatcar-linux.org"
FLATCAR_BOARD="amd64-usr"
$ cat /etc/lsb-release
DISTRIB_ID="Flatcar Container Linux by Kinvolk"
DISTRIB_RELEASE=2492.0.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
```
As far as I understand, so far Ansible has only supported [CoreOS Container Linux](https://github.com/ansible/ansible/blob/cedfe34619128783d2a799695bd4c53d6adc9dd1/lib/ansible/module_utils/facts/system/distribution.py#L387), which will soon be EOL. Recently there was an attempt to support [Fedora CoreOS or RedHat CoreOS](https://github.com/ansible/ansible/pull/53563), but it was not merged. I am not sure whether there has been any recent progress on that.
So would it be reasonable to simply add new code for Flatcar, just like CoreOS Container Linux?
|
https://github.com/ansible/ansible/issues/69516
|
https://github.com/ansible/ansible/pull/69627
|
d7f61cbc281f4b8eccf7fe67eea5522cb28b52b2
|
598e3392a9597f0214d68882da4f4ca07314ce41
| 2020-05-14T15:36:51Z |
python
| 2020-06-02T13:11:53Z |
hacking/tests/gen_distribution_version_testcase.py
|
#!/usr/bin/env python
"""
This script generates test cases for test_distribution_version.py.
To do so it outputs the relevant files from /etc/*release, the output of distro.linux_distribution()
and the current ansible_facts regarding the distribution version.
This assumes a working ansible version in the path.
"""
import os.path
import subprocess
import json
import sys
from ansible.module_utils import distro
from ansible.module_utils._text import to_text
filelist = [
'/etc/oracle-release',
'/etc/slackware-version',
'/etc/redhat-release',
'/etc/vmware-release',
'/etc/openwrt_release',
'/etc/system-release',
'/etc/alpine-release',
'/etc/release',
'/etc/arch-release',
'/etc/os-release',
'/etc/SuSE-release',
'/etc/gentoo-release',
'/etc/os-release',
'/etc/lsb-release',
'/etc/altlinux-release',
'/etc/os-release',
'/etc/coreos/update.conf',
'/usr/lib/os-release',
]
fcont = {}
for f in filelist:
if os.path.exists(f):
s = os.path.getsize(f)
if s > 0 and s < 10000:
with open(f) as fh:
fcont[f] = fh.read()
dist = distro.linux_distribution(full_distribution_name=False)
facts = ['distribution', 'distribution_version', 'distribution_release', 'distribution_major_version', 'os_family']
try:
b_ansible_out = subprocess.check_output(
['ansible', 'localhost', '-m', 'setup'])
except subprocess.CalledProcessError as e:
print("ERROR: ansible run failed, output was: \n")
print(e.output)
sys.exit(e.returncode)
ansible_out = to_text(b_ansible_out)
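# "ansible localhost -m setup" prefixes its JSON output with a header such as
# "localhost | SUCCESS => {...}", so slice from the first '{' before parsing.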
parsed = json.loads(ansible_out[ansible_out.index('{'):])
ansible_facts = {}
for fact in facts:
try:
ansible_facts[fact] = parsed['ansible_facts']['ansible_' + fact]
except Exception:
ansible_facts[fact] = "N/A"
nicename = ansible_facts['distribution'] + ' ' + ansible_facts['distribution_version']
output = {
'name': nicename,
'distro': {
'codename': distro.codename(),
'id': distro.id(),
'name': distro.name(),
'version': distro.version(),
'version_best': distro.version(best=True),
'lsb_release_info': distro.lsb_release_info(),
'os_release_info': distro.os_release_info(),
},
'input': fcont,
'platform.dist': dist,
'result': ansible_facts,
}
print(json.dumps(output, indent=4))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,516 |
Flatcar Container Linux not properly discovered
|
##### SUMMARY
[Flatcar Container Linux](https://www.flatcar-linux.org) is not properly discovered by Ansible, in particular when setting the hostname.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hostname
##### ANSIBLE VERSION
```paste below
ansible --version
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['.../.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### OS / ENVIRONMENT
Guest OS: Flatcar Container Linux Alpha 2492.0.0
Host: Fedora 32
##### STEPS TO REPRODUCE
1. Clone [image builder](https://github.com/kubernetes-sigs/image-builder).
2. Adjust files for Flatcar.
3. Run `packer build`, which creates a Flatcar image and provisions a host.
##### EXPECTED RESULTS
No error
##### ACTUAL RESULTS
Failure like:
```
flatcar-alpha: TASK [sysprep : Set hostname] **************************************************
flatcar-alpha: fatal: [default]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Flatcar)"}
```
For info, in Flatcar you can see for example:
```
$ cat /etc/os-release
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=2492.0.0
VERSION_ID=2492.0.0
BUILD_ID=2020-04-28-2210
PRETTY_NAME="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar-linux.org/"
BUG_REPORT_URL="https://issues.flatcar-linux.org"
FLATCAR_BOARD="amd64-usr"
$ cat /etc/lsb-release
DISTRIB_ID="Flatcar Container Linux by Kinvolk"
DISTRIB_RELEASE=2492.0.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
```
As far as I understand, so far Ansible has only supported [CoreOS Container Linux](https://github.com/ansible/ansible/blob/cedfe34619128783d2a799695bd4c53d6adc9dd1/lib/ansible/module_utils/facts/system/distribution.py#L387), which will soon be EOL. Recently there was an attempt to support [Fedora CoreOS or RedHat CoreOS](https://github.com/ansible/ansible/pull/53563), but it was not merged. I am not sure whether there has been any recent progress on that.
So would it be reasonable to simply add new code for Flatcar, just like CoreOS Container Linux?
|
https://github.com/ansible/ansible/issues/69516
|
https://github.com/ansible/ansible/pull/69627
|
d7f61cbc281f4b8eccf7fe67eea5522cb28b52b2
|
598e3392a9597f0214d68882da4f4ca07314ce41
| 2020-05-14T15:36:51Z |
python
| 2020-06-02T13:11:53Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v',)):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
    # if the path just needs to exist (i.e., it can be empty), we are done
if allow_empty:
return True
    # file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
    # - have a function parse_distribution_file_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
    # We can't include this in SEARCH_STRING because a name match on its keys
    # causes a fallback to using the first whitespace-separated item from the file content
    # as the name. For os-release, that is in the form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
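    # Characters stripped from the raw dist file content before matching:
    # single quotes, double quotes, and backslashes (see _parse_dist_file).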
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
        # can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
            # FIXME: most of these don't actually look at the dist file contents, but random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
print('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
            # the file exists (possibly empty) and allow_empty is set. For example, ArchLinux
            # with an empty /etc/arch-release and a /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example pattern are 13.04 13.0 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
                    # SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
# Starting with SLES4SAP12 SP3 NAME reports 'SLES' instead of 'SLES_SAP'
# According to SuSe Support (SR101182877871) we should use the CPE_NAME to detect SLES4SAP
if re.search("^CPE_NAME=.*sles_sap.*$", line):
suse_facts['distribution'] = 'SLES_SAP'
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
                release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
class Distribution(object):
"""
    This class fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
OSDIST_LIST = (
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT Linux',
'ClearLinux': 'Clear Linux Software for Intel Architecture',
'SMGL': 'Source Mage GNU/Linux',
}
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', ],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SGML': ['SGML'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix']}
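    # Invert OS_FAMILY_MAP into a flat distribution-name -> family lookup.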
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
        # look for an os_family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
return {}
def get_distribution_NetBSD(self):
netbsd_facts = {}
# FIXME: poking at self.facts, should eventually make these each a collector
platform_release = platform.release()
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family'])
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,516 |
Flatcar Container Linux not properly discovered
|
##### SUMMARY
[Flatcar Container Linux](https://www.flatcar-linux.org) is not properly discovered by Ansible, especially while setting hostname.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hostname
##### ANSIBLE VERSION
```paste below
ansible --version
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['.../.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 28 2020, 00:00:00) [GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)]
```
##### OS / ENVIRONMENT
Guest OS: Flatcar Container Linux Alpha 2492.0.0
Host: Fedora 32
##### STEPS TO REPRODUCE
1. Clone [image builder](https://github.com/kubernetes-sigs/image-builder).
2. Adjust files for Flatcar.
3. Run `packer build`, which creates a Flatcar image and provisions a host.
##### EXPECTED RESULTS
No error
##### ACTUAL RESULTS
Failure like:
```
flatcar-alpha: TASK [sysprep : Set hostname] **************************************************
flatcar-alpha: fatal: [default]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Flatcar)"}
```
For info, in Flatcar you can see for example:
```
$ cat /etc/os-release
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=2492.0.0
VERSION_ID=2492.0.0
BUILD_ID=2020-04-28-2210
PRETTY_NAME="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar-linux.org/"
BUG_REPORT_URL="https://issues.flatcar-linux.org"
FLATCAR_BOARD="amd64-usr"
$ cat /etc/lsb-release
DISTRIB_ID="Flatcar Container Linux by Kinvolk"
DISTRIB_RELEASE=2492.0.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Flatcar Container Linux by Kinvolk 2492.0.0 (Rhyolite)"
```
As far as I understand, Ansible has so far only supported [CoreOS Container Linux](https://github.com/ansible/ansible/blob/cedfe34619128783d2a799695bd4c53d6adc9dd1/lib/ansible/module_utils/facts/system/distribution.py#L387), which will soon be EOL. Recently there was an attempt to support [Fedora CoreOS or RedHat CoreOS](https://github.com/ansible/ansible/pull/53563), which was not merged. I am not sure whether there has been any recent progress on that.
So would it be reasonable to simply add new code for Flatcar, just like CoreOS Container Linux?
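If that route is taken, a minimal sketch might look like the following. This is hypothetical only: it mirrors the existing `CoreosHostname` class and assumes the gathered distribution fact would resolve to `Flatcar`; the merged fix may well differ.
```python
# Hypothetical sketch, not the merged fix: mirrors CoreosHostname and
# assumes the gathered distribution fact resolves to 'Flatcar'.
class FlatcarHostname(Hostname):
    platform = 'Linux'
    distribution = 'Flatcar'
    strategy_class = SystemdStrategy
```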
|
https://github.com/ansible/ansible/issues/69516
|
https://github.com/ansible/ansible/pull/69627
|
d7f61cbc281f4b8eccf7fe67eea5522cb28b52b2
|
598e3392a9597f0214d68882da4f4ca07314ce41
| 2020-05-14T15:36:51Z |
python
| 2020-06-02T13:11:53Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname, supports most OSs/Distributions, including those using systemd.
- Note, this module does *NOT* modify C(/etc/hosts). You need to modify it yourself using other modules like template or replace.
- Windows, HP-UX and AIX are not currently supported.
options:
name:
description:
- Name of the host
required: true
use:
description:
- Which strategy to use to update the hostname.
            - If not set, we try to autodetect; this can be problematic, especially with containers, as they can present misleading information.
        choices: ['generic', 'debian', 'sles', 'redhat', 'alpine', 'systemd', 'openrc', 'openbsd', 'solaris', 'freebsd']
version_added: '2.9'
'''
EXAMPLES = '''
- name: Set a hostname
hostname:
name: web01
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils._text import to_native
STRATS = {'generic': 'Generic', 'debian': 'Debian', 'sles': 'SLES', 'redhat': 'RedHat', 'alpine': 'Alpine',
'systemd': 'Systemd', 'openrc': 'OpenRC', 'openbsd': 'OpenBSD', 'solaris': 'Solaris', 'freebsd': 'FreeBSD'}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
    A subclass may wish to assign a different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
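    # __new__ delegates to get_platform_subclass(), which selects the most
    # specific registered subclass matching (platform, distribution).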
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
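        # Strategy precedence: explicit 'use' parameter, then systemd
        # detection on Linux, then the platform subclass default.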
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class GenericStrategy(object):
"""
This is a generic Hostname manipulation strategy class.
A subclass may wish to override some or all of these methods.
- get_current_hostname()
- get_permanent_hostname()
- set_current_hostname(name)
- set_permanent_hostname(name)
"""
def __init__(self, module):
self.module = module
self.changed = False
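        # Prefer hostnamectl when present; otherwise require the plain
        # hostname binary (get_bin_path with required=True fails the module).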
self.hostname_cmd = self.module.get_bin_path('hostnamectl', False)
if not self.hostname_cmd:
self.hostname_cmd = self.module.get_bin_path('hostname', True)
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class DebianStrategy(GenericStrategy):
"""
This is a Debian family Hostname manipulation strategy class - it edits
the /etc/hostname file.
"""
HOSTNAME_FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SLESStrategy(GenericStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
HOSTNAME_FILE = '/etc/HOSTNAME'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class RedHatStrategy(GenericStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
            f = open(self.NETWORK_FILE, 'r')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
                        k, v = line.split('=', 1)
return v.strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
            f = open(self.NETWORK_FILE, 'r')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
finally:
f.close()
if not found:
lines.append("HOSTNAME=%s\n" % name)
f = open(self.NETWORK_FILE, 'w+')
try:
f.writelines(lines)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class AlpineStrategy(GenericStrategy):
"""
    This is an Alpine Linux Hostname manipulation strategy class - it edits
    the /etc/hostname file, then runs hostname -F /etc/hostname.
"""
HOSTNAME_FILE = '/etc/hostname'
def update_current_and_permanent_hostname(self):
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, '-F', self.HOSTNAME_FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(GenericStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
def get_current_hostname(self):
cmd = [self.hostname_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostname_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostname_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(GenericStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/conf.d/hostname'
    def get_permanent_hostname(self):
        name = 'UNKNOWN'
        try:
            f = open(self.HOSTNAME_FILE, 'r')
            try:
                for line in f:
                    line = line.strip()
                    if line.startswith('hostname='):
                        # value follows the 9-char 'hostname=' prefix; quotes are optional
                        name = line[9:].strip('"')
                        break
            finally:
                f.close()
        except Exception as e:
            self.module.fail_json(msg="failed to read hostname: %s" %
                                  to_native(e), exception=traceback.format_exc())
        return name
    def set_permanent_hostname(self, name):
        try:
            f = open(self.HOSTNAME_FILE, 'r')
            try:
                lines = [x.strip() for x in f]
            finally:
                f.close()
            for i, line in enumerate(lines):
                if line.startswith('hostname='):
                    lines[i] = 'hostname="%s"' % name
                    break
            f = open(self.HOSTNAME_FILE, 'w')
            try:
                f.write('\n'.join(lines) + '\n')
            finally:
                f.close()
        except Exception as e:
            self.module.fail_json(msg="failed to update hostname: %s" %
                                  to_native(e), exception=traceback.format_exc())
class OpenBSDStrategy(GenericStrategy):
"""
    This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
HOSTNAME_FILE = '/etc/myname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SolarisStrategy(GenericStrategy):
"""
    This is a Solaris 11 (or later) Hostname manipulation strategy class - it
    executes the hostname command.
"""
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(GenericStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/rc.conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("hostname=temporarystub\n")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
        try:
            f = open(self.HOSTNAME_FILE, 'r')
            try:
                for line in f:
                    line = line.strip()
                    if line.startswith('hostname='):
                        # value follows the 9-char 'hostname=' prefix; quotes are optional
                        name = line[9:].strip('"')
                        break
            finally:
                f.close()
        except Exception as e:
            self.module.fail_json(msg="failed to read hostname: %s" %
                                  to_native(e), exception=traceback.format_exc())
        return name
    def set_permanent_hostname(self, name):
        try:
            f = open(self.HOSTNAME_FILE, 'r')
            try:
                lines = [x.strip() for x in f]
            finally:
                f.close()
            for i, line in enumerate(lines):
                if line.startswith('hostname='):
                    lines[i] = 'hostname="%s"' % name
                    break
            f = open(self.HOSTNAME_FILE, 'w')
            try:
                f.write('\n'.join(lines) + '\n')
            finally:
                f.close()
        except Exception as e:
            self.module.fail_json(msg="failed to update hostname: %s" %
                                  to_native(e), exception=traceback.format_exc())
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class OpenSUSETumbleweedHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-tumbleweed'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class ManjaroHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = DebianStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = DebianStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = DebianStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = DebianStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = DebianStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = DebianStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = DebianStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = DebianStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = DebianStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = DebianStrategy
class OsmcHostname(Hostname):
platform = 'Linux'
distribution = 'Osmc'
strategy_class = SystemdStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = DebianStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = DebianStrategy
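# Adding support for a new distribution (for example Flatcar, per the issue
# above) generally means defining one more Hostname subclass with matching
# platform/distribution values and an appropriate strategy_class.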
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|